It is now possible to upgrade Linux Mint 18 to version 18.1. If you've been waiting for this, I'd like to thank you for your patience.

Upgrade for a reason

"If it ain't broke, don't fix it." You might want to upgrade to 18.1 because some bug that annoys you is fixed, or because you want to get some of the new features. In any case, you should know why you're upgrading. As excited as we are about 18.1, there's no rush to upgrade if 18 is working well for you.

Package updates

Upgrading to 18.1 will apply all level 1 updates for you. You do not need to apply level 2, 3, 4 or 5 updates to upgrade to the new version of Linux Mint, and doing so won't apply these for you.

Enjoy

Upgrading to 18.1 is relatively easy. In the Update Manager, click on the Refresh button to check for any new version of mintupdate and mint-upgrade-info. If there are updates for these packages, apply them. Launch the System Upgrade by clicking on "Edit -> Upgrade to Linux Mint 18.1 Serena". Follow the instructions on the screen. Once the upgrade is finished, reboot your computer.

Additional info

Although Linux Mint 18.1 features a newer kernel, this upgrade does not change the kernel on your behalf. This is a decision only you should take.

Thanks for the update! If I move my pre-existing panel to the side of the screen, the applets overlap the side of the panel and hover to the right, no matter how large I resize it. Not a big deal, as if I create a new panel and add applets it works fine. It looks like the applets may still be acting as if the panel were horizontal (e.g. the clock layout).

I just updated the Update Manager to 5.1.0.3, but this option under "Edit" is still unavailable to me. Is it just a matter of time, or am I doing something wrong?

Unable to update. Did not show anything… I don't see the option. By the way, where does Linux Mint keep its information about what DE version it is? Mine was Cinnamon, then I switched to MATE by installing mint-meta-mate and removing cinnamon, but I'm not sure the system fully knows it's a MATE version now.
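Several comments below note that the upgrade entry only appears once mintupdate is recent enough (5.1.0.3 or later). As a hedged sketch, here is one way to compare Debian-style version strings with `sort -V`; on an actual Mint box the installed version would come from `dpkg-query -W -f='${Version}' mintupdate`, but the value below is hard-coded for illustration.

```shell
#!/bin/sh
# Hedged sketch: decide whether an installed mintupdate is older than
# the version said to ship the "Upgrade to Linux Mint 18.1 Serena"
# menu item. The installed version here is a made-up example; on a
# real system it would come from dpkg-query.
installed="5.0.9"    # illustrative value, not read from the system
required="5.1.0.3"

# sort -V orders version strings component by component, so the last
# line of the sorted pair is the newer version.
newest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | tail -n 1)
if [ "$newest" = "$required" ] && [ "$installed" != "$required" ]; then
    echo "mintupdate $installed predates $required: refresh the Update Manager first"
else
    echo "mintupdate $installed should be recent enough"
fi
```

With the example value above, the script reports that a refresh is needed; a strictly correct check on a Debian-family system would use `dpkg --compare-versions` instead, which understands epochs and tilde suffixes that plain `sort -V` does not.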
Edit by Clem: Information about your edition is stored in /etc/linuxmint/info, provided by mint-info-cinnamon or mint-info-mate. @2 – the update will take a while to filter through the various mirrors. For people who don't see the option:
– This is currently only available for the Cinnamon and MATE editions.
– It's also only available in Mint 18 (if you're running 17.3, you first need to upgrade to 18 before you can upgrade to 18.1).
– The option only shows up after mint-upgrade-info and mintupdate are up to date (if you're using a mirror, these updates might not be available to you just yet).

Cinnamon updated without problems. The Adwaita theme bug is fixed. Mint-Y works fine. Now I can see my Logitech wireless keyboard and mouse battery status. I'm sure there are a lot of not-too-obvious features and fixes to discover. Two MATE boxes are upgrading… Thanks!

These instructions worked fine for me, having previously gone from Cinnamon to MATE in an incremental upgrade. Matt

Clem: thanks for your note that this only covers 18 to 18.1 (maybe you should edit the original post). Also, how about a link to the Mint 17 to Mint 18 upgrade path instructions? Thanks!

Edit by Clem: Thanks very much!

Perfect update on my 2000-vintage Pentium M computer. (I did have a problem where the upgrade was not showing up in mintupdate, but I think it was the mirror I was using – jmu-edu – as soon as I changed to the default software sources it showed up and worked great.) 18.1 seems to be running even better than 18.0.

Hi, I updated mintupdate to 5.1.0.3 (the Help/About menu does not show any version information, I had to use dpkg instead!!) but no upgrade menu item is appearing (even after rebooting). How can I fix this, please? Knut

Worked flawlessly and fast (took only a few minutes) on my main laptop. Thank you very much 🙂

THANK YOU SO MUCH 🙂

Just did the upgrade on my desktop. It went very well! I did manually remove Banshee and install Rhythmbox to be in line with the new release. Thanks for the great work!
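Clem's note above points at /etc/linuxmint/info for the edition information. A hedged sketch of pulling the edition field out of such a file: the KEY=value field names are assumptions based on the file's usual layout, and a temp file stands in for the real /etc/linuxmint/info.

```shell
#!/bin/sh
# Hedged sketch: extract the edition from a Mint-style info file.
# We simulate /etc/linuxmint/info with a temp file; on a real system
# you would read the file directly. Field names are assumptions.
info=$(mktemp)
cat > "$info" <<'EOF'
RELEASE=18.1
CODENAME=serena
EDITION="MATE 64-bit"
EOF

# Grab the EDITION line, drop the key and any surrounding quotes.
edition=$(grep '^EDITION=' "$info" | cut -d= -f2- | tr -d '"')
echo "Edition: $edition"
rm -f "$info"
```

On a real install, `cat /etc/linuxmint/info` shows all the fields at once, which is usually all that's needed to answer "which edition am I actually running?".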
Edit by Clem: Hi Don, yes, it's up to you to choose between Rhythmbox and Banshee. We chose to switch, but in the scope of an upgrade this choice is obviously yours.

Sigh… after installing 18.1 I reboot to a black screen with a cursor.

Edit by Clem: Install or upgrade?

The upgrade to Cinnamon 18.1 was smooth, quick and painless! Everything is running beautifully. I really appreciate the wonderful and hard work you folks on the Mint team put into this project.

Awesome, thank you Clem and LM team!

I updated Linux Mint to 18.1 and restarted. After decrypting my drive I am stuck at a black screen with only a cursor. Help, please!

Edit by Clem: Do you see the login screen, or is this after you log in?

That's extremely quick. Thanks! 🙂

Still the little bug with mouse trails on the desktop* on my Intel NUC5PPYH (CPU => N3700), BUT the update works wonderfully and it feels like Kodi 16.1 runs smoother than ever before on my HTPC! Thank you so much to all you Linux Mint guys! *Linux Mint 18.1 Cinnamon

Updated, very good. Thank you.

Just upgraded my Mint 18 MATE 64-bit installation to 18.1. It was a pleasantly boring experience 🙂 The only thing I noticed was that mate-terminal had reverted to the system theme settings for the colours, so I had to untick that box to get back to my preferred appearance. A minor detail, really. If I notice anything major or significant, I'll make a post about it. Many thanks and congratulations to Clem and the Linux Mint team, as well as "Merry Christmas" to all of you.

Newb here using Mint 18. If I upgrade from 18 to 18.1… does it erase data, files, etc.?

Edit by Clem: Hi Nick, no, it doesn't delete anything.

The upgrade only took a few minutes, and everything seems to be running great for me. Thanks, Mint team! I absolutely love how easy this makes the update process.
All looking good so far (MATE), but I received the following error during the upgrade:

… Processing triggers for doc-base (0.10.7) …
Processing 47 changed doc-base files…
Error in `/usr/share/doc-base/xapian-python3-docs', line 9: all `Format' sections are invalid.
Note: `install-docs --verbose --check file_name' may give more details about the above error.
Registering documents with scrollkeeper… …

Running install-docs --verbose --check /usr/share/doc-base/xapian-python3-docs produced:

Warning in `/usr/share/doc-base/xapian-python3-docs', line 9: file `/usr/share/doc/python3-xapian1.3/index.html' does not exist.
Error in `/usr/share/doc-base/xapian-python3-docs', line 9: all `Format' sections are invalid.
/usr/share/doc-base/xapian-python3-docs: Fatal error found, the file won't be registered.

Edit by Clem: That's weird, python3-xapian1.3 didn't get any update…

Linux Mint 18.1 with kernel 4.8.0-30 works fine on my old Sony Vaio TX. Thanks to the team 😉

Hi Clem, thank you for the new update. It looks fine. BUT: I have some trouble with the panels, because I can't click on the applets with the mouse. The panels seem to be inactive. The hot corners don't work either. The panels seem to lie over the desktop and cover it. The fan of my computer has been running continuously since the update. Best wishes, Andreas

Edit by Clem: Hi Andreas, some of the devs in the team are aware of this issue and are looking for a fix. In the meantime, try to restart Cinnamon: press Alt+F2, enter "r" without the quotes and press Enter.

Some serious issues with the panel configuration ATM on Cinnamon. I tried to add some side panels, but ended up having my system not respond at all to any clicks on any of the panels (not even after multiple reboots). Had to downgrade back to 18.0.

Edit by Clem: Remove all 3rd party applets, window-list with grouping in particular.

Thank you for the article! Please continue with similar articles in the future.
Hi everybody, a couple of people reported black screens after the upgrade. I don't know what the issue could be, but until we find out, here are a couple of steps to troubleshoot after the upgrade:
– Run "inxi -r" to make sure your repositories are pointing to serena and xenial.
– Reinstall mint-meta-cinnamon (or mint-meta-mate if you're using the MATE edition), in case some of the updates weren't downloaded/installed properly.
– Make sure you rebooted after the upgrade.
If you're still faced with a black screen, tell us whether that's before or after the login screen, and which kernel and drivers you're using. Also please tell us if you're using NVIDIA Prime, and if that's the case, check your version of nvidia-prime ("apt version nvidia-prime") and ubuntu-drivers-common ("apt version ubuntu-drivers-common").

Dear Clem, I did the upgrade. The interface is much better, thank you all. However, there is a problem with the panels. A while after I log in, the panels become unresponsive. In fact, they aren't really there: if I click on the panel, the click goes to the desktop, and if I maximize a window, it fills the whole desktop. The panel is still visible, though. I haven't added any new panel or any new items on the panel; in fact, no changes since the upgrade. Do you have any solution? Thanks in advance.

Edit by Clem: Hi Bora, try to remove all 3rd party applets/extensions and reset the panel configuration. Also turn off any IM (fcitx, ibus) in case you're using them, to pinpoint the source of the problem.

Just updated and zero problems. 😉 But I see an issue in Update Manager -> Linux Kernels. The kernel order (left column) is weird; instead of being ordered older to newer, the order on my computer is 3.19, 4.2, 4.4, 3.13, 3.16, 4.7… 😕

Having updated to LM 18.1, I unfortunately realized that xplayer still has the same rendering problems it already had in the LM 18 beta (see also comment #553 on Linux Mint 18 Sarah Cinnamon – BETA Release).
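Clem's first troubleshooting step above is verifying that every repository line points at serena/xenial. A hedged sketch of that check, run against invented sample text standing in for the output of `inxi -r` (or a concatenation of /etc/apt/sources.list and sources.list.d/*):

```shell
#!/bin/sh
# Hedged sketch: after the upgrade, no repository line should still
# name the old codenames. The sample text below is illustrative, not
# read from a real system.
repos='deb http://packages.linuxmint.com serena main upstream import backport
deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse'

# Count lines still naming pre-18.1 codenames (sarah = Mint 18,
# rosa = Mint 17.3); any hit suggests an incomplete upgrade.
stale=$(printf '%s\n' "$repos" | grep -cE 'sarah|rosa' || true)
if [ "$stale" -eq 0 ]; then
    echo "repositories look upgraded"
else
    echo "$stale repository line(s) still point at an old release"
fi
```

On a live system you would pipe the real output through the same grep, e.g. `inxi -r | grep -cE 'sarah|rosa'`.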
When started, it does not always draw its window completely; in particular, the menu and the sidebar displaying file information are not drawn completely or not correctly. Xplayer, as it is present now, is not usable. The old Totem 3.10.1 as included in LM 17.3 worked fine on all my machines. This strange xplayer phenomenon still prevents me from updating my main machine from LM 17.3 to LM 18. Please repair xplayer.

Thanks for the awesome distro once again! A bug that I'd like to confirm: when upgrading from 18 to 18.1 on an nvidia-optimus laptop, Cinnamon crashes once the installation of the proprietary drivers is completed, and it starts only in fallback mode. The same happens with a fresh install too (it didn't happen on 18). Installing the mint-meta package solves the problem. When you find the solution, you could also update the ISO!

Edit by Clem: Hi Nikos, no, mint-meta is installed by default. I think the issue is related to the need to log out once after rebooting. It's described in the release notes. We'll try to make MDM do that on login going forward so an initial logout is no longer necessary.

Seamless update on my extra machine. I had to check to see if it was actually updated.

No problems upgrading to 18.1 Serena! I hope to keep using Linux Mint for a long, long time.

@Clem re: post 26 (python3-xapian1.3) – just to confirm that my system was fully up to date, as confirmed by sudo apt-get update/upgrade, before the upgrade process was launched. dpkg informs me that I have python3-xapian 1.3.4-0ubuntu1 amd64 installed (this is not something that I did manually, but perhaps it was pulled in by other software). The Update Manager's history of updates does not list the package as having been updated, either during the upgrade process or since the start of last month.

Edit by Clem: Hi Pedro. "apt policy python3-xapian1.3" should tell you what version you have, what versions are available, and where they come from.
If this is an issue from a 3rd party source or a PPA, remove it, and use the Software Sources tool to downgrade foreign packages.

@Clem re: post 38.
python3-xapian1.3:
  Installed: 1.3.4-0ubuntu1
  Candidate: 1.3.4-0ubuntu1
  Version table:
 *** 1.3.4-0ubuntu1 500
        500 xenial/universe amd64 Packages
        100 /var/lib/dpkg/status
I'm no expert, but that looks good to me, so no action taken. It still doesn't explain the original error during the upgrade process, but I can look further into that.

@Clem @32 Problem solved by removing the applets and resetting the panel configuration. Thank you very much. By the way, really a great distro and a very friendly community. I really love to use, study and do things on it.

I could never get 18 to display properly on my Lenovo laptop, but I am pleased and relieved to say that 18.1 is working well.

I see no menu entry for the update. Is this supposed to work for an 18.0 system that was upgraded from 17.x?

Edit by Clem: Yes, it should.

Hi, when I try to upgrade I get the following error:

Preconfiguring packages …
Setting up install-info (6.1.0.dfsg.1-5) …
/usr/sbin/update-info-dir: 2: /etc/environment: -e: not found
dpkg: error processing package install-info (--configure):
subprocess installed post-installation script returned error exit status 127
Errors were encountered while processing: install-info
E: Sub-process /usr/bin/dpkg returned an error code (1)

Now I can't upgrade to Mint 18.1 nor install anything via CLI or GUI; I always get the same error. How can I fix this? Thank you.

Edit by Clem: you should have an /etc/environment… with this inside of it: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games".

What happens if you have Mint 18.1 Beta installed? Does it automatically update to the final release with updates? Thanks.

Edit by Clem: Its Update Manager provides all the updates contained in the stable version, yes.

Upgrade went great. Using kernel 4.8.8. Thanks!

Clem, beautiful job as always!
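Clem's fix above amounts to restoring a one-line /etc/environment. A hedged sketch of that repair, writing the PATH line to a temp file and sanity-checking it (editing the real /etc/environment would need root, so a stand-in file is used):

```shell
#!/bin/sh
# Hedged sketch: restore a minimal /etc/environment and verify it.
# We write to a temp file rather than the real path, which would
# require root privileges.
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
EOF

# The dpkg error quoted above ("/etc/environment: -e: not found")
# suggests a stray token in the file. Scripts read it as KEY="value"
# assignments, so count any line that is not shaped like one.
bad=$(grep -cv '^[A-Za-z_][A-Za-z0-9_]*=' "$env_file" || true)
echo "malformed lines: $bad"
rm -f "$env_file"
```

After repairing the real file as root, `sudo dpkg --configure -a` would let the interrupted install-info configuration finish.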
Thanks for all the great work you and your team do! Best OS in existence! One question: is there any way to change the panel menu open/close effect and theme back to the previous design? It's not a big deal if not, but to me the new open/close effect is very frantic and harsh. I really enjoyed the more gentle effect from before, where it faded in, nearly in place, but with a slight upward drift. Then it closed by floating slightly upward as it faded out. It felt much classier to me. Is that a theme-specific thing that could be changed with a different theme, or is the old way just gone now?

Edit by Clem: The boxpointers are gone for good. The animation can be disabled/enabled in the menu preferences.

@Pedro Thanks for the tip about Grub! I had the same issue with it showing the old version. Your fix did the trick. Do you know if there are any other parts of the system that also may not have changed the version?

First, I just upgraded my VirtualBox install. The Grub menu still shows "Mint 18" after upgrading and running update-grub. Secondly, I have to say I do not agree with the "don't upgrade unless there is something specific you need today" philosophy. I heartily recommend upgrading early and keeping the system as up to date as possible, for the following reasons:
– Upgrade issues happen, so it's better that the upgrade system is exercised by as many people as possible, exposing any fringe-case issues that can be dealt with to make the system more robust.
– It is only a matter of time before you will want to install something that has dependencies on something in the new OS version, and you will be forced to upgrade. Thing is, you have less software and fewer custom configurations on your system today than you will have tomorrow. An upgrade today is simpler and more likely to succeed than tomorrow's will be.
– Security updates arrive more quickly for current software versions than for older ones. Don't let bit rot set in.
Upgrade now and keep your system fresh.

Hi, thank you, a very easy and smooth update! I've just figured out one little annoying thing: the keyboard layout applet shows flags, and there is no option to switch to a textual representation. 🙂 Flags are too distinctive… Why is this option missing? Is there any way to switch to a textual representation?

Edit by Clem: Go to System Settings -> Keyboard -> Layouts; there are actually more options than before.

Awesome! 18.1 Serena is performing really well! Thanks to the LM team for their great effort.

@39 Pedro _and_ Clem et al… I had the same issue with "Linux Mint 18" (instead of 18.1) being displayed in the Grub menu on three machines I upgraded today to 18.1 MATE 64-bit, after reboot(s). Thus, I also had to do the '/etc/grub.d/10_linux' hack. BTW, this was _not_ just an issue with the earlier 17.x KDE upgrade(s). FWIW, see here: .

Despite the hopeless warning from Clem not to upgrade if everything works (yeah right), OF COURSE I went ahead and updated two machines (Cinnamon, one with Nvidia and one with Radeon). I did make a Macrium Reflect snapshot beforehand, though. No regressions observed, only an expected older theme menu breakage. Not so dramatic. Congrats for the extremely well done and professional-looking update walk-through. 18.1 follows the same update procedure as version 17 to 17.3. Minor but solid improvements. Will advise if something comes up. The only bugs still there are the Clementine display corruption (I don't use it anyway) and the Nvidia 304.132 drivers, which Nvidia + Ubuntu haven't fixed yet. Thanks to the Mint team. Nice progression.

Sorry for the late reply, Clem. It is an upgrade, and I have it set to auto-login. I tried the nomodeset from your release notes, but it didn't work.

@Pedro says: December 18th, 2016 at 8:26 pm. Thank you, Pedro. I had the same problem as you. Thanks again for your solution!

During the install it asks whether you want to replace lsb-release with the new one or not.
I would suggest saying YES, as otherwise you will have incorrect release info. The default is NO :-/ P.S. If you press D to get the differences, the install process locks up. P.P.S. A "sudo dpkg --configure -a" fixed it after pressing 'D', i.e. I could then press 'Y'. Thank you & happy holidays!

Mint 18.1 has LibreOffice 5.1.4. Mint 18 has LibreOffice 5.1.2. If I upgrade from 18 to 18.1 using the Update Manager, will it also upgrade LibreOffice? And will it also replace the default music player with Rhythmbox?

Edit by Clem: It won't replace rhythmbox for you and it won't upgrade LO (note that both versions are available in both releases).

Sounds good, still waiting for the XFCE DE… installing the XFCE DE on Mint 18.1 MATE only causes problems.

@Neal (48): No, that's the only problem I found. Enjoy!

@Clem (40 et al): Sorry for having distracted you with this. Part of the upgrade process obviously ran a check on the doc-base files, and it was only bad luck for me that I had python3-xapian1.3 installed. A look at the Ubuntu amd64 package shows that they have the package folders in a mess, containing both /usr/share/doc/python-xapian3 and python3-xapian3 folders, only the former of which has index.html, but a doc-base file that points at a non-existent index.html in the latter. Hence the error.

My PC has a problem after the update to Cinnamon 18.1. Since upgrading, when the screensaver starts and I try to unlock the screen, I am sure my password is correct, but pressing Enter in the screensaver fails to return me to the desktop. I can only reach the desktop by clicking the user icon behind the input box. How do I get this back to normal? Please help me! Thank you very much!!!

I can tell you from first-hand experience that the Linux Mint 18 to 18.1 update process was infinitely less painful and tedious than the installation of the Windows 10 Anniversary Update!
Hi all, I ran the Update Manager last evening in Mint 18 to do the update to 18.1, and now the Update Manager has disappeared from the bottom task bar. Trying to run it from the Menu (it's still there) doesn't open it. Running on a Toshiba Satellite L840 laptop. No previous problems like this from Mint 16 onwards. Any assistance appreciated. Many thanks, Colin S

Edit by Clem: From a terminal, try: "apt update", then "apt install mintupdate mint-upgrade-info".

Hello team, thanks a lot for your work. I was like a little child waiting for his gifts… 🙂 It's a good gift for Christmas. Have a nice holiday.

@Colin (56): You could try running mintupdate from the command line and see if any errors are produced.

I have the same issues with the panel as described in post #32. It seems to be related to 3rd party applets, "Window List with App Grouping" in particular. After removing the applet, the panel works fine. After re-adding it, the problem occurred again.

Thanks Pedro… I had to re-install mintupdate from the command line and then all went well… updated OK. Thanks, Colin S

Just upgraded from 18.0 to 18.1 and got a massive problem. The taskbar is unresponsive. The system boots up and everything works, even the taskbar, but after around 1–2 minutes the taskbar becomes unresponsive, as if my mouse cursor isn't detected. I can't click anything, but using the "Windows key" on my keyboard still brings up the Start menu, where the mouse cursor works, just not in the actual taskbar! I got the taskbar working by disabling the third-party applet "Window List with App Grouping". Problem: I really dislike Mint's standard taskbar; too old, too unclear. I would like an official taskbar like the one in Windows 7 and above.

After the update to 18.1, most of the menus, instead of black, are a kind of ugly gray. Of all the themes I have, only Modern Mint Dark and Windows 10 Dark are black. Changing the width of Windows 10 Dark means going into the panel edit mode.
After that it is not possible to leave the edit mode, and it only works again after a restart. The first 3 pictures are from LM 18.1 and the last one from LM 18, where the menu is black.

Edit by Clem: These themes need to be updated for Cinnamon 3.2. We'll get involved in their maintenance at some stage during the cycle.

Just upgraded. Took less than 10 minutes. Thanks Clem and the Mint team. Everything seems to be working fine. I did, however, observe the following:
1) Xplayer: When I clicked a video file from Nemo, Xplayer initially opened without its menu. Only after toggling to full-screen mode and back did the menu appear. Also, I do not see any option to download subtitles. One request for Xplayer: please include a shortcut key to toggle subtitles on and off. This would really make it convenient instead of having to go through the menu each time.
2) Update Manager / Kernels window: The left pane showing the main kernel versions is not sorted properly. It shows 4.4, 4.8, 4.3, 4.5 in this order. Under 4.4, I can see 4.4.0-53 recommended for security but no recommendation for stability.
Once again, a million thanks for yet another release of this very pleasant distro. Merry Christmas and have a wonderful holiday.

I forgot to mention that the panels are keeping their original color.

Updated with no problem at all.

I am running 18.1 as a dual boot with 17.3. I am into DVB satellite and have 3 machines running PCIe DVB tuner cards.
I honestly can't see any reason to upgrade to 18 or 18.1, as I am finding that the 18s can't display VDPAU video very well. The cheapo dual DVB tuner runs perfectly in 17.3, and I can view 2 channels in separate windows using any combination of VLC, Xine and Kaffeine. In 18 and 18.1, the main instance is Kaffeine; it runs blocky and slow when I try to run two instances, and it barely copes with only one. A mixture of Kaffeine and Xine, and even two instances of VLC tuned to the DVB cards, gives the same very poor results; so poor that 18 and 18.1 are both useless for me. Good old 17.5 can run both tuners and display streamed networked DVB from two other cards in other machines on my LAN with NO problem. So, sorry, the upgrade is pointless for me. Oops… 17.3, not .5… lol

Thanks to the whole team for all the hard work. The upgrade was flawless. This was a great holiday present 🙂

Hi all, @Bora, I have the same panel freeze since updating to 18.1. It was the "Window List with App Grouping 2.7.5" applet on the panel.

My Dropbox icon disappeared from the panel, even when adding Dropbox to the startup applications with a delay of 5 seconds (this solved the problem in a previous version). However, Dropbox is functioning normally.

Where will the upgrade to Mint 18.1 XFCE be available?

Help! After upgrading to Mint 18.1, my system can't seem to shut down properly. How do I fix it?

Hi, installed 18.1 yesterday from scratch. All good except for SMB sharing. Sharing a folder from Nemo still doesn't work correctly, the same as in Cinnamon 18. Thanks

Hello. Is 18.1 available in a 64-bit version in French? Thanks.

This upgrade screwed up my themes. Time to go back to Windows 10.

I can confirm the information in #67.

I love this new screensaver. Is there any way to set it as my default login screen? <3

Edit by Clem: Sorry, no. It's not a display manager; it cannot handle the login screen.

The update worked perfectly on both of my laptops (one with kernel 4.4.0-53 and the other with 4.8.0-30).
Great work!! Just upgraded my tower from Mint 18 -> Mint 18.1. But System Info says the hard drive is 151.8 GB… that doesn't seem accurate, because none of the disks or partitions has that size. The boot disk is a 120 GB SSD and I have 16 GB of swap on another disk. Thanks to the team for the job anyway 😉

I just updated from Sarah to Serena (MATE) in less than 10 minutes. Haven't found any problems yet. What a very seasonal gift to us. Thanks a bunch and Merry Christmas!

Just updated the Cinnamon edition 32-bit successfully. Only one issue, with cinnamon-screensaver: whenever the screensaver is activated and I enter my password, I am unable to unlock the screen. Pressing Enter or clicking the unlock icon with my cursor both do nothing, and the only way to unlock my screen is by clicking on the users icon in the screensaver, going to MDM and entering my password again there. At the moment I have disabled the screensaver and there is no problem at all. Any suggestion for fixing it is welcome. Thanks a lot for an amazing product, guys.

Edit by Clem: Hi Roy, create a GitHub issue for this if you can. Kill the screensaver daemon and run it from the terminal to see if you can catch any error messages (killall cinnamon-screensaver; cinnamon-screensaver), lock the screen again, go to MDM, unlock, and see if there are errors in the terminal.

I just upgraded from Sarah to Serena (Cinnamon) in less than 10 minutes. There were no problems with the update. Thanks.

Très bon boulot ! Il y a moins bien, mais c'est plus cher ! (Very good job! Worse exists, but for more money!)

Hope this will fix the (extremely annoying) bug of WiFi not working half the time. This is (was?) an upstream issue in Ubuntu. Great work, guys! P.S. Why would I want to upgrade my Grub?

Upgraded from 18 and also upgraded the kernel to 4.8.0-30. VirtualBox DKMS can't generate the necessary modules, so I can't use my virtual machines. On kernel 4.4.0-53 it works fine.
Edit by Clem: That's usually the case with Ubuntu kernel series; they take a while to gain DKMS compatibility with various modules. VirtualBox isn't compatible with kernel 4.8.x in xenial just yet.

Thank you very much, Mint team! Updated today from 18 to 18.1. No problems, as always! Best Christmas gift this year! Keep on doing such a good job. Have fun!

Just received a huge regular update for Mint 18 KDE. Plasma looks different now; in my honest opinion more ugly, too minimalistic. Main menu icons without any colors!!! How do I check my Mint/Plasma version?

Edit by Clem: with kinfocenter.

On this page, you ask readers to consult another page, and on it there's a link to an article about playing videos with DRM. But that article seems to be obsolete now, since Chrome and Firefox can play videos with DRM.

Edit by Clem: I'm not sure that's true… you're talking about the recent update to Flash 24? If so, afaik, it still doesn't include DRM support.

@76 You can downgrade to the 4.4.0-53 kernel. Did you generate the modules with:
sudo dpkg-reconfigure virtualbox-dkms
sudo dpkg-reconfigure virtualbox
and load them with:
sudo modprobe vboxdrv
sudo modprobe vboxnetflt
?

Arc Theme is broken. Going back to the old version.

Edit by Clem: There should be no difference in support for it between 18 and 18.1.

Upgrade done without any problem. Thanks Clem.

For those interested: after the update, I had a problem like many others (new Cinnamon 3.2) with my Cinnamon menu theming, and a nice guy at gnome-look pointed me to this large collection of Mint-Y colour variations which have been updated for Cinnamon 3.2. Check it out: github.com/erikdubois/Mint-Y-Colora-Theme-Collection

Edit by Clem: nice 🙂

Smoothest upgrade experience ever!!! Thanks

The mechanics of the upgrade went smoothly (Cinnamon 18.1, from 18). The one problem I encountered was that I couldn't reach my DSL modem. My workaround at the moment is to manually set the IPv4 info, which allowed it to connect immediately. The symptom was timeouts.
After turning the network off and on, it would sometimes connect to the modem. If it managed to connect, the connection was then solid until a reboot or until I played with it. Then back to timeouts and stopping and starting the network. This was for both wired and WiFi.

I upgraded from 18.1 Beta using the instructions given in a separate post. Absolutely no problems, and I updated all levels (including the kernel). Although I did not try this in Sarah or 18.1 Beta, I notice that when I use the menu option to Restart the system, it takes a long time to complete the shutdown process (~20–30 seconds) before starting to boot up again. I tried:
1. sudo reboot – no problem (shutdown happens in about 5 seconds)
2. shutdown via menu – no problem
Not sure if this is an 18.1 problem, but it is very easy to test. Hope it will be easy to fix? Anyway, this is a great release! Thanks very much, team!

dpkg issue. The update to 18.1 stalled out after dpkg starts. I stopped the upgrade and tried to run "dpkg --configure -a" from the terminal; dpkg stalled again at the same spot:
update-initramfs: Generating /boot/initrd.img-4.4.0-53-generic
Warning: No support for locale: en_US.utf8
I have seen this warning many times before but have never had a stall. My computer will not reboot or shut down (possibly a good thing at this point). As always, thank you so much for your work.

Edit by Clem: What does "locale" say?

Hello. How can I update my Linux Mint 64-bit using the terminal?

Edit by Clem: You can't perform the same upgrade from the terminal, but you can "fully" upgrade it using APT. You first replace all occurrences of sarah with serena in /etc/apt/sources.list and /etc/apt/sources.list.d/*, then refresh the cache with "apt update", then perform a full upgrade with "apt dist-upgrade". Be aware that this will do more than just upgrade you to 18.1, though: it will apply all available updates as well, including kernel changes.

Have been trying out the new features in 18.1 for a day now.
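Clem's APT-based upgrade above starts by swapping the release codename in the APT sources. A hedged sketch of that substitution, run against a temp copy rather than the real /etc/apt/sources.list (editing the real files requires root):

```shell
#!/bin/sh
# Hedged sketch of the codename swap behind "replace all occurrences
# of sarah with serena". It operates on a temp copy; the real files
# are /etc/apt/sources.list and /etc/apt/sources.list.d/*.
src=$(mktemp)
cat > "$src" <<'EOF'
deb http://packages.linuxmint.com sarah main upstream import backport
deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
EOF

sed -i.bak 's/sarah/serena/g' "$src"   # -i.bak keeps a backup copy
result=$(cat "$src")
echo "$result"
rm -f "$src" "$src.bak"

# After editing the real files as root, the remaining steps Clem
# describes would be:
#   apt update && apt dist-upgrade
```

Note that only the Mint line names the codename; the Ubuntu base lines stay on xenial, which matches the "serena and xenial" repositories mentioned in the troubleshooting comment earlier in the thread.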
So far, no major issue. I love the new feature in Nemo that enables a double mouse click to go to the parent directory. I did not expect it to be useful to me, but it is. I also love the new search bar at the bottom of Xed. This makes more sense, really. But I did find one minor issue with the screensaver: coming out of suspend, it tells me that I have 3 notifications when in fact I had none.

Another minor issue, though this is not just with Cinnamon 3.2; it was already there with previous versions of Cinnamon when the LM 17 point releases were the latest. I did not report it because it was so glaring that I thought everybody, including the developers, would have picked up on it easily. But it's still there in 18.1. Anyway, I have my panel at the top of the screen (the only panel on the only monitor; using a notebook here with no external monitor). Whenever I open an application for the first time, say Nemo or the terminal, and I minimize the window, the window effect shows the window shrinking to the bottom instead of to the top panel. Subsequent minimizations of the same window show the correct effect: shrinking to the top panel. Not sure why this is so.

Edit by Clem: Please create an issue for this on .

@97 I can read videos on Netflix with Google Chrome. I did not test with the last version of Firefox, but it should work after activating "Read DRM content" in the preferences.

Very simple steps you gave us for the installation. Thanks for sharing the post with us. Great work indeed, keep it up.

Thanks Clem, just upgraded my Acer Aspire 5222. No issues, all done in 10 min or less. Happy Christmas!

mintupdate updated to 5.1.0.4 in Sarah: it now updates Grub and fixes the issue of the Grub menu staying at 18.0.

Will this upgrade option via the Update Manager also be available on 17.3 systems? I hoped that it would come with one of the latest daily updates, but nothing here so far…

Edit by Clem: You can upgrade 17.3 to 18 with this tutorial, and then from 18 to 18.1 with the steps described here.
From #105 LANG=en_US.UTF-8 LANGUAGE== Edit by Clem: Hi Jeffrey, your locale looks good. It must just be a warning, and the cause of the issue must be somewhere else. Clem, just submitted the issues on github. Thanks. Help!!! After upgrading my system from LM 18 to 18.1 my system can’t seem to shut down or restart properly without logging off first. Can somebody help me to fix the problem? Note with 18.1: the latest VirtualBox (5.1.10) from their website (not the repo – 5.0.24) seems to work with kernel 4.8. Not sure if it will maintain compatibility with VM machines already installed. I installed 5.1 using kernel 4.4, then updated to 4.8. VirtualBox complained and asked to run vboxconfig. No errors reported and it seems fine now. Smooth, trouble-free updating in less than 20 minutes. Great job! I’m not sure if this is the place for it but I’ll give it a go. One of my machines is an Acer Z1-601. With both Mint and Ubuntu I’ve a serious problem with the program locking up. I’ve switched off hardware acceleration in my browsers but the problem persists. Any suggestions on how to fix this nuisance? @Clem Here is my “/etc/environment”: PATH=”/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games” LC_CTYPE=en_US.UTF-8 LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8 LANGUAGE=en_US.UTF-8 I don’t think that’s wrong… any other solution for the problem I described above? Sorry for the double post but I just found a solution. Just use this command: sudo mv /var/lib/dpkg/info/install-info.postinst /var/lib/dpkg/info/install-info.postinst.bad Clem, I wanted a current list of extra software that I had installed, BUT the “Backup Tool” crashes at the point when I ask it to make the list of software titles. I have entered my password to run the Backup Tool and selected a valid destination; then I click on “Forward” and it crashes a tenth of a second after it blinks up the GUI with software listed.
BTW, I could not use the Backup Tool to restore my software selection after updating from 17.3 to 18. I reported that and I have heard no response. So, I wrote my own script to add the additional software. Is anyone maintaining the Backup Tool????? BACKGROUND: I decided to update to 18.1 from 18. Before doing that, I decided to back up my own files with rsync, which includes a list of all the extra software that I have installed. The last time that I did this was in July 2016, and the software list is over 1700 lines long. Thanks, Ralph Edit by Clem: Hi Ralph, can you run it from the command line and see if you can catch any errors? The command is “mintbackup”. @107 @ttjimera Same 3 notifications here. Clem, (in response to your edit on #47) Thanks for the info. I do like having animated menus, so I won’t turn it off. I just have a personal preference for the style of animation used before. I like my desktop animations to be subtle and organic, smooth and gentle. I also change the speeds on all Cinnamon animations to 300-400ms after all Mint/Cinnamon upgrades, as well. Just looks nicer to me. Perhaps in future versions, could we see different animation customizations available instead of just on and off? Possibly be able to get the same animation as the old “boxpointers” used? (I didn’t care for the boxpointer menu shape itself, but loved the animation style, and would love some customization options to get a similar result.) Same problem as afilla r after having upgraded to 18.1: shutdown is not working any more; the desktop freezes instead. The current workaround is to log out first; from there the shutdown works. Hi Mint Team! I want to ask how much privacy Mint 18.1 gives, and what I need to know about that. What do I need to do to get full privacy? Is it better than Windows 7, 8, 10, etc.? Thanks! I’m just curious because this is important. Good JOB! Greetings from Poland, Mint Team! I upgraded Mint 18 to 18.1. Problem-free, as always!
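For anyone in Ralph's situation above who wants a list of installed software without relying on the Backup Tool, the classic dpkg approach works. A sketch, shown here on fabricated sample output so it runs anywhere; on a real system you would feed it the output of `dpkg --get-selections` directly:

```shell
# Fabricated sample of `dpkg --get-selections` output (package<TAB>state).
printf 'bash\tinstall\nold-driver\tdeinstall\nvim\tinstall\n' > selections.sample

# Keep only installed packages and drop the state column.
# Real system: dpkg --get-selections | grep -v deinstall | awk '{print $1}' > software-list.txt
grep -v deinstall selections.sample | awk '{print $1}' > software-list.txt

cat software-list.txt
```

On a real Mint install this typically yields a few thousand lines, consistent with the 1700- and 3200-line counts commenters report in this thread.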
Update to 105 and 113: this appears to be a problem with the -53 kernel. I installed the -57 kernel and I do not have any more problems. Thank you again for all you do. Just to point out, similar to #119 (@Ralph), the backup tool crashed for me too! Edit by Clem: Hi Varun, please run “mintbackup” from the terminal and see if you can get an error or stack trace for us. In Nemo, the double-click action to go to the parent directory is not working 🙁 Any suggestion for that? Thanks! Edit by Clem: Afaik, it only works in icon view. Hi, I tried to upgrade my 18 Sarah and I get the following issue: Reading package lists… Done Building dependency tree Reading state information… Done You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies:
cpp-5 : Depends: gcc-5-base (= 5.4.0-6ubuntu1~16.04.2) but 5.4.0-6ubuntu1~16.04.4 is installed
g++-5 : Depends: libstdc++-5-dev (= 5.4.0-6ubuntu1~16.04.4) but 5.4.0-6ubuntu1~16.04.2 is installed
gcc-5 : Depends: cpp-5 (= 5.4.0-6ubuntu1~16.04.4) but 5.4.0-6ubuntu1~16.04.2 is installed
        Depends: libgcc-5-dev (= 5.4.0-6ubuntu1~16.04.4) but 5.4.0-6ubuntu1~16.04.2 is installed
libgcc-5-dev : Depends: gcc-5-base (= 5.4.0-6ubuntu1~16.04.2) but 5.4.0-6ubuntu1~16.04.4 is installed
libobjc-5-dev : Depends: libgcc-5-dev (= 5.4.0-6ubuntu1~16.04.4) but 5.4.0-6ubuntu1~16.04.2 is installed
libpython2.7 : Depends: libpython2.7-stdlib (= 2.7.12-1ubuntu0~16.04.1) but 2.7.12-1~16.04 is installed
libstdc++-5-dev : Depends: gcc-5-base (= 5.4.0-6ubuntu1~16.04.2) but 5.4.0-6ubuntu1~16.04.4 is installed
libstdc++6 : Breaks: libstdc++6:i386 (!= 5.4.0-6ubuntu1~16.04.4) but 5.4.0-6ubuntu1~16.04.2 is installed
libstdc++6:i386 : Depends: gcc-5-base:i386 (= 5.4.0-6ubuntu1~16.04.2) but 5.4.0-6ubuntu1~16.04.4 is installed
                  Breaks: libstdc++6 (!= 5.4.0-6ubuntu1~16.04.2) but 5.4.0-6ubuntu1~16.04.4 is installed
python2.7 : Depends: libpython2.7-stdlib (= 2.7.12-1ubuntu0~16.04.1) but 2.7.12-1~16.04 is installed
python2.7-minimal : Depends: libpython2.7-minimal (= 2.7.12-1ubuntu0~16.04.1) but 2.7.12-1~16.04 is installed
E: Unmet dependencies. Try using -f. Does anyone have any idea? The details of the issue can be found at: Looks like I posted the issue to the wrong place. And here is the new link to the issue details: Edit by Clem: Hi Gelin, check your DNS settings and make sure you can reach the Ubuntu repositories first. It looks like security.ubuntu.com isn’t accessible for some reason. Once that’s done, refresh the APT cache with “apt update”. Started the upgrade and it was taking much longer than expected. Then I noticed that the updater was fetching lists (and later files) from sources different from the usual ones. Checking “Software Sources” showed that the updater had changed these from the local mirrors. Changing them back and trying again had the same issue. Is there a reason the updater does not honor the local mirrors set up in “Software Sources” and changes these? Plus, would they be changed back after the update? Have quite a long time to wait before I will find out myself… Eventually not as long as expected, yet much longer than from the local mirrors. One note on consistency (not really for older users, but perhaps for people moving to Mint): after updating, the message indicates “reboot”, however the option in the Mint Menu is “restart”. M The issue has been fixed, most probably because security.ubuntu.com now resolves to a correct IP address. (See According to,) However, when I am following “Upgrade to Linux Mint 18.1 Serena”, at the end of the procedure when I click on “Install mint-meta-cinnamon”, I get an error dialog that says “Could not apply changes! Fix broken packages first.” Edit by Clem: apt install -f, reinstall the meta, i.e. apt install mint-meta-cinnamon, and reinstall any packages you previously removed. WTF? I wrote that it was a smooth and problem-free update and that it was done in less than 20 minutes. I mentioned a problem I’m having with one particular machine, and you delete the comment? Why?
Edit by Clem: Hi, sorry for the confusion, we use the Akismet antispam system. It filters a lot of SPAM for us and only makes a few rare mistakes, but it also requires manual moderation sometimes, especially for people who post their first comment. Your comment was simply awaiting moderation. #124 (double-click to parent) Did you activate it in Nemo preferences → Behavior? Hi Clem, just wondering if the auto login issue has been solved in 18.1. I am very happy with LM 18 but I still get the login window about 60% of the time even though it’s set to automatic. It’s a minor bug and I am not sure if it is worth the upgrade. Merry Christmas to you all, Giorgio Edit by Clem: Hi Giorgio, no I don’t think so. I can’t say for sure because we don’t know the cause and it seems to be a race condition, so it happens randomly. Fixes went into MDM for 18.1 but I don’t think they would impact this… they might… but my intuition is that they don’t. For people who can’t shut down the computer properly (and who are running 64-bit), can you try the following versions of MDM? Note that after you install a new version of MDM, the old one is still running, so reboot the computer (“sudo reboot” always works) and only then can you start testing the new version of MDM. Try a few times, both versions, and let us know if any of them fix the issue for good for you. Don’t hesitate to describe the issue in detail too, and then, last but not least, reinstall, reboot again, and confirm that the issue is back. Thanks for your help. With your feedback we’ll hopefully get a fix on the way. After upgrading to 18.1 from Mint 18, my system does not shut down properly. I click on the Restart or Shutdown button, then all the desktop icons disappear, but nothing happens. After 2 minutes this message appears: “INFO: task Xorg:1530 blocked for more than 120 seconds. Not tainted 4.4.0-47-generic #68-Ubuntu INFO: task plymouthd:11400 blocked for more than 120 seconds. Not tainted 4.4.0-47-generic #68-Ubuntu” Any ideas?
Edit by Clem: Hi Carlos, could you join the IRC if you’re available and I can troubleshoot with you. Hi Clem and Linux Mint Team, first of all I cannot thank you enough for all the great work you have done with Linux Mint. You have made me a very happy user for so many years already! One little issue I’ve noticed after the upgrade to 18.1 is that double-clicking in Nemo does not seem to work, neither in icon view nor in any other. The Nemo version is 3.2.2. Thanks again! Marcus @clem – thanks for letting us know. Will wait for this action to be implemented across all views. And will run the mintbackup tool and let you know. @marlenejo – yes! But as Clem pointed out, this double-click action works only in icon view, and I have set the view to “list view”. And thanks for pointing to those lovely Mint-Y themes. Got the blue colour 🙂 I upgraded to 18.1 without any errors. However, some of the Fn key shortcuts that used to work no longer do. Brightness up/down and screen on/off worked with 18 but not 18.1. Just updated from LM18 to LM18.1; needed to re-install PulseAudio since the volume applet did not show on the taskbar. Also noticed that HPLIP is not currently supported on LM18.1. Good!!! The update works wonderfully. Updating to 18.1 was smooth and problem-free for me on all of my machines, and was completed in less than 20 minutes. Kudos to everyone involved. I do have one problem with one machine, an Acer Z1-601. With both Mint 18 and 18.1 the machine freezes. Nothing works after this and I am forced to cut the power and reboot. I read that it might have something to do with hardware acceleration in the browsers. I’ve switched hardware acceleration off, but to no avail. The machine still freezes without warning. I don’t recall this happening when not using a browser. So… any suggestions on a possible solution that doesn’t involve throwing the POS Acer Z1-601 in the garbage bin?
Edit by Clem: Hi, it’s hard to troubleshoot freezes because you can’t access the machine to troubleshoot. Does it freeze completely? i.e. you can’t move the mouse anymore? Can you access the console from CTRL+ALT+F2? If so, check the list of processes, top, etc., and dmesg for clues. Yes, Clem, it freezes up entirely. No mouse, no keyboard, nothing. Just this one machine. I’ve an Acer laptop, an HP desktop, and two HP laptops running 17.3 and 18 and haven’t experienced anything like this on any of the other machines. This doesn’t happen using Windows. I tried Ubuntu 16.04 and the same thing happens. So it does appear to be something unique to Linux. Edit by Clem: Try different kernels on it. Or even try to run Mint 17.3 on it, and if it works you could upgrade to 18 and keep the 3.13 kernel. @marlenejo (#136) thanks for that. Worked for me (see #140). Thanks, Marcus Hi, I upgraded from Cinnamon 18 to 18.1 and I do like the new screensaver. While I can pause the track and change the volume when running Clementine (from the repository, version 1.2.3), it is not possible to skip to the next track or go back to the previous one. Thanks for the reply @clem #90 As you suggested I have created a GitHub issue. My Update Manager is only listing Preferences, Software Sources, and Update Policy. It is not allowing me to upgrade to 18.1. It tells me my system is up to date. Is there another way to upgrade to Linux Mint 18.1, perhaps using the terminal? Such a simple upgrade: never had to remove PPAs or anything, and all I had to do was upgrade, then reboot, just like a kernel upgrade. That said though, there are two main problems still: 1. Why is 4.4.0-31 the default? Dirty COW is still a problem in that release. O_o 2. Why does it tell me to install an older Linux metapackage version? I know it’s probably for the Mint guys that didn’t upgrade kernels, but some do, damnit.
😛 Edit by Clem: If you’re running 4.4.0-31, the Update Manager should show you 4.4.0-53 and 4.4.0-57 right now. Regarding MATE, there are still two niggly system panel bugs in Mint 18.1 which I had hoped would be resolved with the update to MATE 1.16: (i) On wakeup from suspend, the main system panel often has elements blacked out, some of which recover after mouseover but others don’t. The only solution I’ve found so far for this is to run sudo pkill mate-panel. Sometimes this has to be run twice because the first time it doesn’t load the Mint Menu button (and in fact asks if the user wants to delete it from the system panel). Wakeup from suspend also used to sometimes result in an invisible cursor arrow in Mint 18, but I haven’t yet seen this in Mint 18.1. (ii) The bug is still a problem whereby the icons of some Qt applications are positioned vertically at the top of the system panel, sometimes displaying a black rectangle beneath them. The only way I’ve found to resolve this is to increase the size of the panel by a pixel and then reduce it again. Two blights on an otherwise sunny Mint/MATE landscape. @Clem Regarding the shutdown problems: – installed mdm_2.0.16-exit-on-sighup_amd64.deb and rebooted => shutdown is working properly – installed mdm_2.0.16-quick-exit_amd64.deb and rebooted => shutdown not working – reinstalled the mdm package from the Software Manager and rebooted => shutdown not working Edit by Clem: OK… could you reinstall the serena version and activate the debug option (in the Login Window preferences, i.e. mdmsetup). Then reboot, log in, and reproduce the failure to shut down. At that stage I’d like to see the output of “grep mainloop_sig_callback /var/log/syslog”. I’ve also discovered that often (but not always), when the screensaver kicks in, or after waking from a suspend, I cannot enter my password.
The mouse seems to work (pointer moving around the screen), and I can get to a terminal through ctrl-alt-f1, etc., and pkill -u username works just fine; after that I can log in normally again. But the screensaver doesn’t seem to respond at all once it’s activated when this occurs. This is with the 64-bit Cinnamon edition of Mint 18.1. Also, this doesn’t seem to happen if I lock the screen manually. Only when I’ve been afk for too long and the screensaver automatically activates, or if I suspend. Sorry for posting so much, but… I’ve also noticed that after I’ve killed my user account to get past the frozen screensaver, my CPU usage jumps way up; after two kills I’m getting 80% or higher on all four CPU cores. It only goes away if I reboot. I’ve had to reboot to resolve my bogged-down system quite often since I upgraded to 18.1. I looked at the running processes (all processes, not just my user processes), and nothing in that list adds up to more than 15% CPU usage or so. But it’s still saying 80%+ on the System Monitor “Resources” tab. Something’s seriously wrong here. Edit by Clem: Hi Mike, we don’t have a fix for the keyboard issue yet, but we have one for the CPU usage. We’re testing it now before pushing an update. Updated my Mint 18 to Mint 18.1 two days ago. All works perfectly. Thank you very much. Me BAD. I did it the old way. Update everything, then sudo-edit the apt sources files and change all sarah entries to serena. Save, refresh and update. Then reboot. I did this as soon as the Serena repos were live. All has been and is working well after the update. Re: 152 (ii): It looks as if the MATE bug might be to do with incorrect handling of SVG icons in the notifications panel. The author of CopyQ, one of the applications involved, has just released a beta which now also provides PNG icons (it previously only included SVG icons and he suspected that this might be causing the problem) – it resolves the issue. Thanks for this easy way to upgrade.
Just a note or two on some bugs I found: 1. Update Manager –> View –> Linux kernels. When you select a kernel and press “changelog”, the browser opens the wrong URL: The correct path to the changelog is: 2. Some dkms packages don’t compile anymore under kernel 4.8.0-30, and 4.8.0-32 makes things worse. Under 4.8.0-30: ndiswrapper, virtualbox, virtualbox-guest. Under 4.8.0-32: ndiswrapper, virtualbox, virtualbox-guest, rtl8812AU_8821AU_linux. The rtl8812AU driver may have been merged into the kernel (see the rtl8xxxu references in the changelog), but the others should probably be supported. Edit by Clem: Thanks, we’ll fix point 1 in the next update. For point 2, it’s usually the case; 4.8 is recent in xenial. When Ubuntu backports a kernel series to an LTS it takes time before they support the various DKMS drivers for it. @Clem no log entries containing “mainloop_sig_callback” are being written to /var/log/syslog when I trigger a shutdown. However, mdm writes some other log entries when I perform a shutdown, see Edit by Clem: Hi Wolfgang, did you enable the debug option in the MDM preferences? What do you mean by “trigger” vs “perform” a shutdown? Clem, I’m responding to your post #138 on the shutdown issue. I upgraded 4 computers from Mint Cinnamon 18.0 to Mint Cinnamon 18.1 via the Update Manager tool. 3 were desktops and upgraded fine; the 4th is a Lenovo G770 laptop with an Intel i5-2410M CPU with integrated graphics (active) plus an additional Radeon HD 6630M (inactive). This laptop will not shut down or restart in Mint 18.1 due to a freeze shortly after issuing the command. A manual reset must be used to turn the machine off. I verified this also exists for both Mint Cinnamon 18.1 Live installation disks (64-bit & 32-bit) when shutting down or restarting prior to installation of v18.1. I performed a series of tests with the 3 versions of mdm as you requested:
1. mdm #1 – the exit-on-sighup version
2. mdm #2 – the quick-exit version
3. mdm #3 – the original version per your download link
I installed each via sudo dpkg -i (name of mdm .deb file). Test results:
1. Replace existing mdm from upgrade with mdm #1. Boot into mdm #1 and issue commands via the Shutdown app added to the Panel: Shutdown – Success; Restart – Success.
2. Replace mdm #1 with mdm #2, same procedure: Shutdown – Failure; Restart – Failure.
3. Replace mdm #2 with mdm #1 again, same procedure: Shutdown – Failure; Restart – Success; Shutdown – Success; Shutdown – Success; Restart – Success.
4. Replace mdm #1 with the original mdm (#3), same procedure: Shutdown – Failure; Restart – Success; Shutdown – Failure; Restart – Failure.
5. Replace mdm #3 with mdm #1 again, same procedure: Shutdown – Success; Restart – Success; Shutdown via command from Menu – Success.
6. Replace mdm #1 with mdm #3, same procedure: Shutdown via Menu command – Failure.
7. Replace mdm #3 with mdm #1 again, same procedure: Shutdown – Success; Shutdown via Menu command – Success; Restart via Menu command – Success.
Conclusion: mdm #1, the exit-on-sighup version, removed the issue. Thanks for the very nice upgrade and I hope these tests help. Dennis Edit by Clem: Thanks Dennis. We’re really close. I know how to fix it; I’d like to understand the cause a little more though. I don’t understand why this only surfaces during the upgrade (although you mentioned you could see it happen in the live session… but I think that’s a different issue), and I don’t understand why it only affects some people/machines and not others. Can I ask you the same as Wolfgang? i.e. reinstall version #3 (the one that came with Serena), enable the Debug mode in the MDM preferences, reboot, ask for a shutdown from the menu (which should fail), and then “grep mainloop_sig_callback /var/log/syslog”. @Clem Yes, I double-checked that the debug option in the login window preferences is activated. With trigger and perform shutdown I mean the same thing, i.e. clicking on the shutdown icon within the main menu 😉 After upgrading, when I connect my iPhone 5s to my computer, it mounts, but doesn’t show anything in it. I was wanting to access the DCIM folder to copy some pictures over to my computer. It was working before I upgraded. My iPhone is running iOS 10.2. I upgraded, but have some problems. After the upgrade I get random freezes. It happens mostly when I do something in the file manager or try to change some menu settings (default Cinnamon menu). None of this happened before. Any advice maybe? Edit by Clem: Hi, first thing, remove all 3rd party extensions/applets to see if they’re involved in the issue. Rhythmbox is not integrated with the panel’s sound icon? There is no icon shown either, even when closed and playing. Not sure if it was like that in 18. Audacious works fine. Edit by Clem: Hi, in Cinnamon it’s integrated with the sound applet. In MATE we provide support via a new package called rhythmbox-plugin-tray-icon. Thanks Clem. I finally got the upgrade finished. Unfortunately, I wish I could go back to my 18 Sarah of 2 days ago. Here are the things I uncovered: 1. The panel doesn’t work with the `Window list with app group` applet anymore; it displays, but it looks like it is transparent to all mouse actions, including left and right click. 2. Still a lot of apt dependency issues. For example, my Skype was removed and I can’t install it. See Sorry, still a few points: 3. The sound icon in the panel disappeared. 4. The CPU fan keeps shouting and Chrome works jaggedly, while `top` doesn’t show very high CPU usage, except that sometimes Xorg and cinnamon eat the CPU but release it very quickly. Regarding the backup tool crash – it’s described here already: Well, I just uncovered a big one: I have an 8-core i7, and previously in the System Monitor I could see 8 cores; now I see only 1.
`cat /proc/cpuinfo` gives me only one core: root ~ cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 60 model name : Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz stepping : 3 microcode : 0x16 cpu MHz : 2646.937 cache size : 6144 KB bogomips : 4788.74 clflush size : 64 cache_alignment : 64 address sizes : 39 bits physical, 48 bits virtual power management: What’s wrong here? Newbie to Linux here. Seamless upgrade, everything worked beautifully, you guys are amazing, thank you for this beautiful OS, peace. Hi Clem, the output that you requested is listed below. If you have any further questions or info needs, please ask. Note: using dpkg to list the packages gives a list of 3400 lines. BTW, a year ago I reported a similar problem with the software restore. Ralph ralph@sclero ~ $ mintbackup (mintbackup.py:12189): Gdk-ERROR **: The program ‘mintbackup.py’ received an X Window System error. This probably reflects a bug in the program. The error was ‘BadAlloc (insufficient resources for operation)’. (Details: serial 9686.) ralph@sclero ~ $ uname -a Linux sclero 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ralph@sclero ~ $ Thanks for the painless upgrade! Worked like a charm; only had to put the wireless password into Wicd. Merci beaucoup! I don’t know why Mint discourages upgrades while every other distro urges users to stay up to date. Won’t it create security issues? Edit by Clem: We don’t. We discourage uneducated updates, i.e. upgrading everything blindly without knowing what is being upgraded and why. It’s all about choice and information for users. In the scope of this particular post, we don’t discourage updates; we’re asking people to wait before they upgrade and to wonder whether they want or need to, to read the release notes, the new features page, etc. Look at the comments: 90% of the people are delighted with their upgrade, but that doesn’t mean it’s smooth for everybody.
Every bit of new technology brings new regressions. Cinnamon 3.2, for instance, doesn’t address security holes; it brings new features and new bugs. A few people are affected by it. We’re chasing memory leaks, keyboard grab issues in the screensaver, shutdown sequence issues in MDM… all in all we’re very happy with 18.1, but that doesn’t mean it’s better for everybody. Another aspect of this upgrade is to separate the new features from the other updates, i.e. it upgrades you to 18.1 without applying the other updates. That’s by design and it’s to separate the two, in an effort to isolate regressions better. If somebody dist-upgrades and suddenly has resume issues, we don’t know where the issue might be because a lot of things changed. If somebody performs this upgrade the way it’s supposed to be done, and suddenly has resume issues, we know the kernel didn’t change, and so we can focus on the changes we made, for instance in the session manager and the DM, to quickly analyze the cause of the regression. Security is very important, so is stability. They require careful thought and analysis, not blind and general dos and don’ts. Okay, the CPU core detection issue has been resolved after I changed my GRUB config, specifically the following line: linux /vmlinuz-4.4.0-53-generic root=/dev/mapper/mint--vg-root ro atkbd.reset=1 i8042.nomux=1 i8042.reset=1 i8042.nopnp=1 i8042.dumbkbd=1 i8042.nopnp noacpi nolapic atkbd.reset changed to linux /vmlinuz-4.4.0-53-generic root=/dev/mapper/mint--vg-root ro quiet splash $vt_handoff The latter was copied from another laptop of mine which is still on 18 Sarah. Now the next issue is that I don’t have an audio device anymore. Running `inxi -A` gives: Resuming in non X mode: xrandr not found. For package install advice run: inxi --recommends Does anyone have any idea? Edit by Clem: Hi Gelin, when you had your DNS issue and faced package issues it looks like you removed critical packages… for instance here, if xrandr is missing, I’m assuming x11-xserver-utils isn’t installed? I would recommend an “apt install xorg x11-xserver-utils --install-recommends”. You’re probably missing many other packages though? Here’s a list of packages included in Mint 18 by default, in case it helps: I have the same problem described in post 154. You cannot unlock the screensaver. I have updated Linux Mint 18 to 18.1 Cinnamon 64-bit. Thank you very much. Edit by Clem: Hi Carlos and everyone affected by this issue. So it looks like the screensaver isn’t focused and can’t grab the keyboard? First, can you make sure you’re not using the “Window List with App Grouping” applet? It’s known to cause issues. Second, does this happen only on idle (i.e. when the screensaver is activated by Cinnamon) or also when you lock the screen manually? Third, please kill the screensaver (killall cinnamon-screensaver) and run it in a terminal (cinnamon-screensaver), then reproduce the issue, log back in (either by killing it from the console or switching user and logging in via MDM) and check the terminal output for signs of errors. I wish I could say I’m having a smooth run with it, but I’m encountering file system corruption and black screens with only the cursor visible after login. It’s a shame, because I like it when it works, but if I can’t fix this I’ll switch to another distro. Edit by Clem: Check dmesg for I/O errors to see if the disk is fine; hopefully you can clean up the FS if it’s at the filesystem level. Hi, first of all congratulations on 18.1 MATE. It is fast and looks great, especially the Mint-Y dark theme. In spite of my love for 17.3 I will give this a go because of the innovations. I hope that in the future Mint will adopt Wayland as well. That would make it even more attractive. I installed PIA from the Debian package and it is really easy to configure. After I subscribed and activated it there is hardly a delay in browsing. So that is also a success.
There are a lot more goodies, but I also have a (minor) shutdown problem and would like to share it. Before I installed Mint 18.1 I had a dual boot between Mint 17.3 and Windows 8.1. I then installed Mint 18.1 beta on a new partition on the boot disk. After the install I went back to 17.3 and did a ‘sudo grub-install’ and a ‘sudo update-grub’. From that moment on, using Mint 18.1, the shutdown hung (or was very slow; I do not know, because I got impatient and pushed the reset button). After a ‘sudo grub-install’ and a ‘sudo update-grub’ on Mint 18.1 the problem was gone. Inspecting /etc/default/grub did not show any differences. After that I installed Cairo-Dock, which has a Logout applet. When I shut down the system using that applet, the shutdown takes a few minutes. When I shut down using Quit in the MATE menu, it shuts down in a few seconds. I wonder what the difference between the two is. I tried your suggestions in post 138 but the system will not allow me to install older versions. I do not care so much because, all in all, Mint 18.1 is a beautiful experience. Thanx! @166 I think, in Cinnamon at least, Rhythmbox needs the rhythmbox-plugins package for panel support. I’d like to know if there are any other recommended packages that are installed by default in 18.1 for this software. Edit by Clem: Yes, rhythmbox-plugin-tray-icon The following updates are on their way: – cinnamon 3.2.7: Fix for click/focus issues with the “Window List with App Grouping” applet (the applet is still broken but it no longer impacts Cinnamon). Smoother magnifying glass. – cinnamon-screensaver 3.2.10: Fix for cinnamon-screensaver-pam-helper using too much CPU. Fix for cinnamon-screensaver-pam-helper not exiting when unlocking from the session proxy (via MDM). – mdm 2.0.17: Fix for Menu->Shutdown hanging before Xorg is killed (i.e. session terminating but not triggering a shutdown sequence). Hi 🙂 to answer Clem from #177: Cinnamon 18.1 32-bit… it seems to be mainly a 32-bit problem. On all 32-bit systems the issue occurs; on 64-bit, no problem. 1. The keyboard works; I can input the password and see the dots, but Enter and the other keys don’t work. Only the logout button works. 2. The Window List with App Grouping applet is not installed on any of the systems. 3. No errors in the terminal after killall > restart > start lock-screen and back via the login screen to the desktop. Hope that helps… regards, ehtron Edit by Clem: Thanks, I’ll try to reproduce it on 32-bit. Is this in a VM or a real machine? Hi Clem and Team! I’m still waiting for an answer to my privacy question. Can you tell me how privacy works in Mint, or what to do about it? Is the privacy better than Windows 7, 8, 10? I also wonder what LM18’s privacy is like. I want to keep full privacy settings; how? Thanks. And happy Christmas to everyone! Edit by Clem: Hi, I’m not sure what you mean. Privacy in Cinnamon is where you tell Cinnamon to remember your recently opened documents or not. Thanks for the fast answer! I mean, Microsoft spies on everything. This is one of the biggest reasons why I changed to LM18. I’m trying to ask: does LM do the same as Microsoft (spy, collect data, etc.)? If not, that’s the best, of course. If it does, I want to know where the settings are so I can choose them. Thanks again! 🙂 Edit by Clem: We don’t collect data from users; we don’t even “ring home” to know how many people use Linux Mint. This is your computer, not ours; that’s an important part of our philosophy. The open-source nature of the OS also means people can check the source code if there’s ever any doubt. Ref: 120 & 172 I ran “python -dv /usr/lib/linuxmint/mintbackup/mintbackup.py” in a shell to see if I could get any more info. Most of the output was loading this and that, so I have included the last bit, which is essentially the same as before. …..
# /usr/lib/python2.7/dist-packages/gi/overrides/Gio.pyc matches /usr/lib/python2.7/dist-packages/gi/overrides/Gio.py
import gi.overrides.Gio # precompiled from /usr/lib/python2.7/dist-packages/gi/overrides/Gio.pyc
# /usr/lib/python2.7/dist-packages/gi/overrides/Gdk.pyc matches /usr/lib/python2.7/dist-packages/gi/overrides/Gdk.py
import gi.overrides.Gdk # precompiled from /usr/lib/python2.7/dist-packages/gi/overrides/Gdk.pyc
# /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.pyc matches /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py
import gi.overrides.Gtk # precompiled from /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.pyc
Gtk-Message: GtkDialog mapped without a transient parent. This is discouraged.
(mintbackup.py:18910): Gdk-ERROR **: The program ‘mintbackup.py’ received an X Window System error. This probably reflects a bug in the program. The error was ‘BadAlloc (insufficient resources for operation)’. (Details: serial 14165)
BTW, running “dpkg --get-selections | grep -v deinstall > tmplist” on my computer produces a list that has 3234 lines. Edit by Clem: Cool, I’m able to reproduce the issue in a 32-bit VM. We have a fix coming for the inability to unlock the Cinnamon screensaver in 32-bit.

Hi 🙂 @clem #182: 2 machines, 32-bit native, and also in a 32-bit VirtualBox (64-bit host, 32-bit guest). Edit by Clem: Thanks, we’ve got a fix coming for it.

Clem, thanks! I can’t even tell in words how happy I am at this moment. Every day brings only better and better news. Good philosophy, keep that 🙂 Your team will change the world; in 10 years nobody will remember what Microsoft was 😀 Perfect!

mintbackup 2.2.6 fixes the crash in 32-bit.

The automatic connection is strange. – On one of my computers, if I activate the automatic connection AND the connection after 10 sec. (I would like less but it’s impossible), it works, but I need to wait 10 sec. after the connection.
If I deactivate the connection after 10 seconds, the automatic connection doesn’t work anymore; I must connect manually! – On another computer, the automatic connection works when I activate the 10 sec OR when I activate it.

Clem, this is the syslog you requested. I reinstalled mdm #3, the download you provided for the original mdm. I booted into it, set the Debug flag in mdmsetup, and then shut down via Menu, which was successful. I booted back up. This time results were a little inconsistent so I did a series of shutdowns:
Shutdown via Menu——————Success after moderate pause
Shutdown via Panel app—————-Failure
Shutdown via Menu——————Success
Shutdown via Menu——————Failure
I shut down manually, booted back up and then did: grep mainloop_sig_callback /var/log/syslog
This is the syslog I am submitting: Hope it helps! Dennis

Problem!!! I copy/pasted the contents of syslog into this message but your firewall denied submission. Block reason: Exploit attempt denied by virtual patching. Perhaps I don’t know how to submit the syslog properly? Dennis Edit by Clem: Hi Dennis, you need to use a pastebin service.

@Clem Hi, thanks for the answer. I found the problem – the Nvidia drivers. I downgraded them from 367 to 340 and it works like a charm 🙂

cinnamon-screensaver 3.2.12 fixes the inability to unlock the screensaver in 32-bit.

When KDE? Edit by Clem: Probably in January. KDE Plasma 5.8 is already in Kubuntu backports. No problem. Why wait until January? Edit by Clem: Normally for testing; for instance, Xfce is almost ready but it needs to go through QA. But what’s also happening here is that we’re facing a circular dependency issue preventing the build of the ISO, and then of course there’s Christmas getting in the way.

FUD is being spread: Edit by Clem: Journalists should ask questions before posting opinions. I can’t think of a single security fix in the 18.1 upgrade, it’s all about new features.
The security updates are exactly the same on 18 and 18.1 and you get them whether you upgrade or not. They’re completely independent and have nothing to do with this upgrade.

@Clem thanks for your advice, I tried with `apt install xorg x11-xserver-utils –install-recommends` and got the error: E: Unable to locate package –install-recommends Then I tried `apt install xorg x11-xserver-utils`, and it installed without error. However I still don’t have sound. In my sound settings, `Output profile` is disabled, and the `Input` tab says “No input sources are currently available”. Any idea? Edit by Clem: Hi Gelin, it’s hard to see on the blog, but –install-recommends has two “-” signs at the beginning. Check the packages containing pulse in the list, they’re related to pulseaudio, the default sound system.

@Clem, the filesystem.manifest is huge; is there an easy way to allow me to install them as a whole?

@Clem, thanks for your quick response. Now I get this error: root / var log apt install pulse: pulseaudio : Depends: libpulse0 (= 1:8.0-0ubuntu2) but 1:8.0-0ubuntu3.1 is to be installed E: Unable to correct problems, you have held broken packages. Edit by Clem: Hi Gelin, please use the forums and IRC to seek help for this. Also, I forgot to mention, you can see the history of APT events in /var/log/apt/history.log. Check in there to find the details about which packages you removed and try to reinstall them. Attend to the broken packages first.

Is a clean install possible through the wizard?

Upgrade from 18 was okay. Had some issues when I switched to the NVIDIA driver. My main issue is that my browsers keep crashing, especially after the screen saver has come on.

Typed “login name” into the Mint search box. Too much of NADA, so Googled it. You may want to include the solution for finding out the login name in your upgrade information. Chose to go with the Ubuntu site, scrolled past the stonewallers and found the simplest reply, which said to go to the black screen and type in… whoami Now I know.
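A side note on the copy/paste trap Clem points out above: blog software often renders two ASCII hyphens as a single en-dash, so a pasted `--install-recommends` arrives as one exotic character and apt treats it as a package name. A small sketch (the `check_flag` helper is made up for illustration) showing how to tell the two apart:

```shell
# check_flag reports whether a string really starts with two ASCII hyphens,
# or begins with something else (e.g. an en-dash pasted from a web page).
check_flag() {
  case "$1" in
    --*) echo "ok: starts with two ASCII hyphens" ;;
    *)   echo "bad: not two hyphens (perhaps an en-dash from the web page)" ;;
  esac
}

check_flag '--install-recommends'   # typed by hand
check_flag '–install-recommends'    # copied from the blog (en-dash)
```

If apt complains it is “Unable to locate package” something that looks like your option, this is almost always the cause.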
“rhythmbox-plugin-tray-icon” Yes, perfect now! It probably should be installed with rhythmbox by default, otherwise there are no controls once it is closed and running. What a great team here! It’s not going to take very long before 18.1 is THE Linux standard in the world.

Superb work, Clement. You must be really proud of what your team is doing. Linux Mint >> Microsoft.

Awesome job, LM staff.. finally I can upgrade within one Linux distro.. and nothing crashed… writing this in my LM 18.1 MATE edition, using the Mint update… happy Christmas to me

Mint 18.1 caused my panel to lock up all the time. The latest update for Cinnamon that came down tonight fixed the issue. Edit by Clem: It does, thanks for your feedback.

(please remove comment n° 190) The automatic connection to MATE is strange. – On one of my computers, if I activate the automatic connection AND the connection after 10 sec. (I would like less but it’s impossible), it works, but I need to wait 10 seconds before the automatic connection. If I deactivate the connection after 10 seconds, the automatic connection doesn’t work anymore; I must connect manually! – On another computer, the automatic connection works when I activate the 10 seconds OR when I activate it.

Hi, I got an MDM update yesterday and since then my shutdown problems seem gone. Thanks very much! Edit by Clem: Yes, thanks for your feedback.

Thank you very much!! It upgraded to Mint 18.1 from 18 (MATE). It took 6 minutes! Miracle time!!!…^^

Clem, this refers to #191: I am submitting my mdm debug syslog via Github – first time using it, so I don’t know if you can read it properly. This is the link: Hope this helps, Dennis

Everything went as smooth as silk with the upgrade to 18.1. I read that article on cio.com regarding security and find it hard to believe this journalist had no clue about the… “Your system is up to date” check with Mint. Thanks Mint team. Merry Christmas

Can I upgrade from 17.3 to 18 or 18.1? Serena looks great. These new icons are awesome.
It brings a contemporary breath to the good old design of Cinnamon.

@211 @Derek You can upgrade to 18 from 17.3 (), and then to 18.1.

@209 Dennis Laskowski and everybody: If you want to use the Gist service on Github, there’s a third party application called Gisto, available at I’ve just downloaded it and it’s simple and it seems to work well. You don’t need to run the setup script if you don’t want to. You can just run the ‘gisto’ shell script in the ‘gisto’ folder.

@Clem and the Linux Mint team: Thank you for LM 18.1 and all your hard work. Please take the next four days off and relax and have a great Christmas. Edit by Clem: You can actually use the pastebin command in Mint, it uses gist.

I just upgraded my LM18 MATE laptop and desktop using the upgrade option/tool. It took ~15 minutes (12 of which were downloading the new pieces). It seems to have worked flawlessly both times. 🙂

Hi, I’m not exactly sure how the lifecycles for Linux Mint releases work, but I noticed that all 17.x versions seem to be supported until 2019. Does that mean that if there is a bug or security vulnerability affecting all versions of 17.x, each version (17.0, 17.1, 17.2, 17.3) would have corresponding updates for the broken packages? Edit by Clem: Yes.

Easiest upgrade I’ve ever done. Went without a hitch. Thank you so much.

Just got back here and read about the shutdown issue some of the posters here mention. I have updated to the mdm package just released and decided to test it out myself. I have no problem shutting down. The problem, however, is that when I tried to restart the system, it took more than a minute to shut down, where previously it only took a few seconds (using an SSD here). There is no problem in booting up; the time taken is just a few seconds as expected. I also realized that the shutdown only slows down when I have logged on to Cinnamon at least once.
If I did not log on to Cinnamon and straight away restarted from MDM after the system came up, there is no slowdown. But once I have logged on to Cinnamon (and even if I log out back to MDM), the shutdown process will slow down. This is not a major issue for me. Please enjoy your holidays. But I will appreciate it if you can look into this later. Thanks.

I wish to revert to Mint 18 Sarah from the current 18.1. The mouse pointer (a plain arrow head, even though enlarged) disappears as my system comes out of rest mode. I have to reboot to get the mouse pointer back. How do I uninstall 18.1? Edit by Clem: This is fixed in afaik. Make sure to update xserver-xorg-video-intel if you haven’t already done so.

@sauerkraut (219) – Although I experienced the same thing in Mint 18 MATE, it appears to have disappeared in Mint 18.1. If Clem’s advice doesn’t work: when I had the problem, I found that by clicking on the Menu button, then Control Centre, then moving the cursor around a bit, I would get the arrow to return with no need to reboot.

Still no upgrade option from 18 > 18.1 Cinnamon. Do I have a problem, or have not all the mirrors been brought up to date? I have installed everything that Update Manager has to offer. Thanks

I have been using Mint 18.1 since it was released and, having a dual boot of 17.3 and 18.1, I can’t see any advantage in staying with 17.3. It is that good. Now a bit of bitching:
– The French translation is sometimes misleading. Like “Paramètres/espaces de travail/Paramètres” – “Permettre le déplacement parmi les espaces de travail” devrait être plutôt “Déplacement entre les espaces de travail en boucle”. OR, more annoying: after sending a job to the printer, the popup window label is “Erreur d’imprimante” while there is none. It should probably be “Impression du document” with the status presented underneath.
– Some of the options are difficult to understand, and a bubble popup would be enormously helpful.
– I haven’t seen where to change Cinnamon shortcuts.
Most of them are unknown but some are very useful. Merry Christmas to everyone and to my utmost respected software development team: the Linux Mint Team.

Re my comment 221 above: I was in my Mint KDE OS and did not realise it. I am now in my Mint Cinnamon partition and the upgrade to 18.1 is going fine. Merry Christmas to all.

Thanks to all the team for the hard work, and happy Christmas to all of you here!

Merry Christmas! A default keyboard shortcut to bring up the Linux Mint Menu is the Windows key. Please add Ctrl+Esc to the default Linux Mint installation: in Windows, Ctrl+Esc performs the same function as the Windows key, and it also works on Windows if the Windows key is available. It would be appreciated if this would also work in Linux Mint by default. Thank you

Merry Christmas! Please enable Linux Mint to gracefully open multiple programs. Steps to reproduce:
1. Open multiple programs from the panel (e.g. Firefox, Thunderbird, …) by clicking on them.
2. Say Firefox opens first. Start typing a URL.
3. Suddenly Thunderbird opens, the focus changes to Thunderbird, and some of the key-presses which were meant for Firefox are caught by Thunderbird. This can cause a lot of problems such as deleting and archiving email, etc.
The solution is that if a program is currently being used, any other program that opens: a) would not get the focus; b) would open behind the program that is currently being used; and c) the panel entry could flash 3 times when it is fully opened (optional). If you have a Wind* box, please have a look at how it is implemented there. Thank you

Merry Christmas to the Mint development team and all the testers throughout the world. A special thank you to Clem for the individual time and attention he gives to us OS users! Thank you!

When will Linux Mint 18.1 KDE come?

Hi LM folks, hi Clem!
I’ve been running LM for years now, first on a Lenovo Thinkpad E520, now on a Thinkpad W530. It was my guide away from Windows, and now I am running your latest LM 18.1 Cinnamon without any problems! The upgrade from LM 18 took about 6 minutes – and done! Unbelievable!! Thanks for this outstanding distribution of a really stable and, what’s more, extremely well supported operating system! Clem, as Bill S. said above, the special attention your users get from you is fantastic! Thank you all! Jörg

First question: I have at this moment LM18 Cinnamon; can I upgrade to 18.1 MATE the easy way? Or can I only upgrade to 18.1 Cinnamon (not MATE), the same as what I have now? Second question: if 18.1 is not working and I want to go back to LM18 Cinnamon (however that might happen), what do I need to do? Install LM18 Cinnamon from USB and start again with a clean desk? If I understood correctly before, upgrading will NOT delete files, but how do I undo it: cancel the upgrade, return, and go back to the old version, 18 Cinnamon? Thanks! And HAPPY NEW YEAR 2017 to all!!

Upgraded from Mint 18 without a hitch. Many thanks, Mint Team.

First off, I’m not a complete idiot, just a Linux idiot, so I may not be able to chase down any problems for feedback. In any case, I have Cinnamon 17.3 and 18.1 32-bit installed on two partitions and Windows XP on a third on my old laptop. I can’t get the clock to use a 12-hour format in 18.1: even though I turned off the 24-hour option in [Time & Date], it makes no difference. The second issue, and I don’t really expect a solution but who knows…: in 17.3 and 18.1 I can connect to my home WiFi with no problems, but yesterday I took my laptop to my son’s home to demo Linux for him and found that 18.1, which I booted into first, refused to connect to his ATT WiFi, so I booted into Windows XP and it worked fine. So I then booted into 17.3 and again it connected just fine.
Rebooted into 18.1 and it still wouldn’t connect… Sorta proved my hardware should not be a problem, but I could only wonder why 17.3 would work but not 18.1. Thanks for reading this, Ken

Okay, got the time/date format working as desired 🙂

Dang, spoke too soon, it’s still in 24-hour mode.

@226 Óvári, I just tried the ‘Steps to reproduce’ you suggested and Thunderbird did not grab focus while I was typing in the Firefox URL bar. In fact, the behaviour was exactly as you suggest in ‘Solution ….’, including flashing the panel icon 3 times. I’m on Linux Mint 18.1 MATE 64 bit. What are you on?

Hello. Merry Christmas to all nations and people. “Houston, we’ve got a problem!” Especially with the newest kernel 4.8.0-xx. This kernel doesn’t handle my USB speaker system in the Linux Mint sound system the way the older 4.4.0-xx kernel did. The wiki of Ubuntu Users Germany tells me that sometimes a kernel makes problems with USB sound systems. After downgrading the kernel, the USB media sounds very good again. I think it’s not strictly a case of upgrading Linux Mint Cinnamon 18.0 to 18.1, but maybe the Linux Mint team will know an answer in the future. Thanks for reading this. Best regards. Frank

One small thing I would like to see is the option to get rid of the “Do you want to switch to a local mirror” message in the system update area. I am happy with the mirror I am using and don’t need a constant reminder.

Hi, VirtualBox does not work with the 4.4.0-57 kernel; the kernel downloads do not get added to the box to work. Also the current update fails to load using the update icon. Thanks

@kyuss I too hate that default. Go to Update Manager / Edit / Preferences / Options… and uncheck the box on the bottom line.

@235 Ahab Greybeard, Thank you for your response. I’m on Linux Mint 18.1 Cinnamon 64 bit. Thanks. Everything seems to be OK.
Kernel: 4.4.0-53-generic x86_64 (64 bit gcc: 5.4.0) Desktop: Cinnamon 3.2.7 (Gtk 3.18.9-1ubuntu3.1) dm: mdm Distro: Linux Mint 18.1 Serena CPU: Quad core AMD Phenom 9600B (-MCP-) cache: 2048 KB Instead of “Window List with App Grouping” I use “Icing Task Manager 4.1.1” –

@peter Thanks much for the tip. I found the option to “don’t suggest to switch to a local mirror” in preferences.

@232 Ken, the WiFi issue could be due to the version of wpa installed. If I recall correctly, version 2.1 works without any glitches. Search for that particular version, install it, and update us here!!

Hi Matthew, Okay, so I guess I am an idiot 🙂 I did a search on wpa 2.1 and got a reference to ubuntu7.3: the wpa package. 18.1 has just been released; wouldn’t it contain the latest code? Or is there an issue with the latest package, and 2.1 would be going backwards to get something that might work on his WiFi connection? Especially since 17.3 and 18.1 work for me at home, I have no idea why 18.1 won’t work at my son’s. I will try 17.3 on my next visit; he’s 100 miles from me and has some terrible work hours, so it may be a while before this happens??? Also, I have only ever installed the complete system and never installed individual packages, only run the updates when available. Thanks for the suggestion, even though I have no idea what was installed in the 18.1 download or how to find and install another “package”. Is there a different version in 18.1 vs 17.3? Because I did connect with 17.3 and XP but not 18.1. I’ve read some of the posts here and see folks happy staying with 17.3 for now, and if 17.3 works for my son he could work with that.

@Ken, v2.7 comes with Mint 18.1 and 18. It has issues with WPA-Enterprise authentication. I use Mint 18 at work, and downgraded the pkg to v2.1 to be able to connect to the office network. The simple WPA-Personal auth works fine with any version of the pkg. You might find “wpasupplicant_2.1-0ubuntu7.3_amd64.deb” in your search result.

Hi Matthew, thanks for the link.
It may have gotten me to what I’ll need. From the link I take it that file is for an AMD 64-bit CPU, so I clicked on “wpasupplicant” in the path near the top of the page, which gave me other options, and I think his desktop has an Intel Pentium or Celeron CPU, so maybe this is what I need: ( wpasupplicant_2.1-0ubuntu1.4_i386.deb ) The site notes: “it is strongly suggested to use a package manager like aptitude or synaptic to download and install packages, instead of doing so manually via this website.” Since I’ll be in Mint, I take it that I’ll be doing it manually… Okay, as it may not happen for a while and he may not like Linux, I’ll look around a bit more and then just have to wait until we can get together again. In the meantime I may install the file on my 18.1 system and see if it still works for me… Since I just installed 18.1 a few days ago, I won’t lose any important files if I have to end up reinstalling v2.7 or, worst case, reinstalling 18.1 ;-/ Thanks for your help Matthew, I will drop by when I get further along with this adventure!

When will Mint 18.1 Serena KDE be released?

Clem, even after the recent package updates, I’m still having problems with the screensaver freezing up after resuming from suspend, and occasionally CPU usage spiking near 100% on all four CPU cores after a suspend. Is there a way to revert back to Linux Mint 18 without reinstalling, until these problems can be resolved?

Smooth update. Slow boot and login remain, so I have to keep looking for a solution here. Boot: 1’09” Login: 38″

Upgraded in a short period of time. Smooth and clean, it seems. Thank you for the great work.

@Matthew, I have the 2.1 package on my laptop, but when I told it to install I got an error message saying a later version was already installed. Of course the question is how do I get it to install, hoping that is not a monumental task!
Thanks, Ken

@Ken,
sudo dpkg -r wpasupplicant
sudo dpkg -i wpasupplicant_2.1-0ubuntu7.3_amd64.deb
This is all you need, but please make sure to lock the version of this pkg in the Synaptic package manager and also in Update Manager, to keep it from getting upgraded.

@Matthew, LOL, ah… yea… right… I have no idea what you just said. I appreciate your knowledge; however, I’m coming from being a Windows user since 1997 and used to write code in HP Basic for automated testing, but when it comes to Linux, as I said, I’m an idiot, or more correctly I am for all practical purposes totally ignorant. I have grown to loathe Microsoft, and knowing Win 7 support will end in 2020, I am in the process of getting used to using Linux, and it’s getting better with time. I’ve tried several times in the past, but it’s another language which I don’t speak, and a reason I believe it will keep many folks from using it. I don’t push my son to switch because I’ve told him I would not be able to give him much help if he had a problem. Also, I wonder about the amd64 reference in “sudo dpkg -i wpasupplicant_2.1-0ubuntu7.3_amd64.deb”, because I’m on an Intel machine; plus my old laptop and his computer are both 32-bit machines, so the “amd64” doesn’t connect in my brain, a puzzle to which I have no answer. Don’t recall if I’ve ever seen that in the Windows files; I’ll take a look to see if there is any reference to it. Just as I don’t know exactly what sudo dpkg is all about, I don’t know what a “Synaptic package manager” is… Okay, found it, and it looks like I have 2.4-0ubuntu6 installed, and I found under the [Package] menu a [Lock Version] option. Also, right-clicking on the package name I see that wpasupplicant is marked, so I’m guessing that with a right click I could select the “Mark for Removal” option, but I have no idea where to go from there… and I’m not getting any clues in the Update Manager on how I would keep it from being upgraded.
I’ll just set him up with 17.3 to try out, since that should connect on his WiFi. Regards, Ken (Time to rest my eyes.)

Hello South America. We love you; let’s all try and get along. Thanks from Canada.

Thanks, Clement. Happy New Year. I’m out of here.

@Matthew About your reply to Ken @251… those are terminal commands, correct? I’m not going to go into that here, but could you point both him and me to an easy-to-understand reference for those sorts of things? Or maybe around this blog somewhere? If this is an inappropriate question for here, I apologize.

@246 Ken Hi Ken, The ‘amd64’ is just an indicator to say that the code is for the 64-bit architecture. This is for both Intel and AMD CPUs. It’s confusing at first. x86 and i386 mean 32-bit systems; x86_64 and amd64 mean 64-bit systems. You mentioned not losing important files after a reinstall, because you’d only just installed a version of LM. It’s a very good idea to have your personal data in a separate partition that gets mounted into your home folder. You do this in the installation stage by choosing the Manual partitioning option instead of letting the installer do its fully automatic thing. Or, you can set it up by editing /etc/fstab after installation. That way, you have a definite and separate storage location for personal data that can be quickly and easily copied onto another drive by using the GParted utility. Download and burn a copy of the GParted Live CD; it’s a great tool and comes in 32-bit and 64-bit versions. Details and advice about all this and much more can be found in the many hundreds (if not thousands) of Linux forums on the internet. Concentrate on the Mint and Ubuntu forums for obvious reasons. Here’s one I found after 3 seconds of Google searching: “Okay so I guess I am an idiot …” No, you’re not. There are just lots of things that you don’t know yet. Give it time and you’ll get more and more capable, as we all do. Good luck with everything and keep on trying and learning.
Ahab

Hi Ahab, Thanks for the encouragement 🙂 At present I’m not concerned with losing personal files, just my mind! I copied some music and photo items so I can get a feel for how things work, just basic things I need to know how to do. I will keep GParted in mind for down the road, another thing to learn. My 8510GZ laptop is ~12 years old; it has an Intel Pentium M 740 processor, which is 32-bit architecture, and that’s why the file given to me showing amd64 has me confused. I did find one for i386, which I thought might be what I may need to use at my son’s. Thanks for the link to the forum, I’ll check it out. I started here because it’s an issue of 18.1 not connecting to my son’s WiFi while 17.3 works okay. At my home both Mint versions connect. Wishing one and all a HAPPY NEW YEAR! Ken

@ Clem: RE 145 & 146 I installed 17.3 on an external drive connected to the problematic machine (Acer Z1-601), as well as installing the 19. whatever kernel in the existing 18.1 on that machine’s HDD. The results were the same. When watching videos or movies via the browsers, the program eventually will freeze without warning. The time frame is arbitrary; it may run only a few minutes or for more than an hour, but eventually it will freeze. No keyboard, mouse, or anything. The only solution is to power off the machine. So, I thought I’d pass that info on to you just in case… Thanks.

I upgraded to 18.1. After the upgrade, the menu doesn’t work properly. When I move my mouse over the All Applications, Accessories, Administration and Preferences categories, the list of applications in each category is invisible, i.e. I can’t see the name and icon of the application when I move the mouse over it, but I can see the name and explanatory text at the bottom. The other categories work as usual. I don’t think I have any 3rd party applets.
Thanks

I too did not have the option to update to 18.1. I had just installed Mint and thought I had installed Cinnamon, but I checked /etc/linuxmint/info and I am on 18.1 already.

@255 John M Hi John, If something like ‘sudo’ is obviously a terminal command, just type ‘man sudo’ as a terminal command and you will be shown the manual entry for that command. These manual entries can be quite terse and seem very cryptic at first, especially with all the various option listings. For a beginner, a good thing to do is type “linux sudo” or “linux mint sudo” into the Google search engine. You’ll often get dozens of results. There are many good explanatory and tutorial articles and posts out there, so don’t just read one or limit yourself to the Mint sites. The Ubuntu sites are also very useful and usually appropriate. I learned everything I know about Linux and Mint by using Google Search and reading the tutorial and forum entries that I found. In general, if you have a problem with Mint, somebody somewhere has also had the problem and made a post about it on a Mint or Ubuntu forum. There will then be answers and advice about how to deal with the problem.
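Following up on the wpasupplicant downgrade recipe discussed a few comments up: before removing anything, it can help to check whether the installed version actually sorts newer than the known-good 2.1 build. A sketch using `sort -V` (the `needs_downgrade` helper and version strings are illustrative, not a Mint tool); after a real downgrade you would still pin the package, e.g. via Synaptic’s Package → Lock Version, as Matthew says:

```shell
# needs_downgrade succeeds when the given wpasupplicant version string sorts
# newer than the known-good 2.1 target (version strings are examples).
needs_downgrade() {
  installed="$1"
  target="2.1-0ubuntu7.3"
  newest=$(printf '%s\n%s\n' "$installed" "$target" | sort -V | tail -n 1)
  [ "$newest" = "$installed" ] && [ "$installed" != "$target" ]
}

needs_downgrade "2.4-0ubuntu6"   && echo "downgrade needed"
needs_downgrade "2.1-0ubuntu7.3" || echo "already on the 2.1 build"
```

`sort -V` does a proper version-aware comparison, so 2.4 correctly sorts after 2.1 even though a plain string sort would too; it also handles multi-digit components like 2.10 vs 2.9.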
Its not a crisis – if necessary I can rebuild from scratch – but I wonder if anyone else experienced this, and if there are any tips for resolving it. a PS to my post #262 re Properties in Nemo. The test PC I built directly from Beta does not have this problem, just the 2 upgraded machines. @259 Tracey, try right-click on Menu –> Preferences –> Theme. Check if Use custom colors is checked. Uncheck it if it is. Maybe this will help…. Updated Mint 18.1 Cinnamon 64 bit with the latest dot updates. I have what appears to be a different screen saver issue or at least different symptoms. When I try and wake the screen saver up, I don’t get a password box. However I can still type my password and press enter and unlock the screen. It will take 5-15 seconds to display the desktop and then another 5-15 seconds for the desktop to become responsive. Sometimes I see the CPU at 25%. Looks nice other than the same old pesky issues. Still having issues with icons appearing on both displays without having to reset the view. System clock still getting out of sync after repeated attempts to update. Still can’t get fill manager windows to remember last location. For my home and work PC, I’m fine with fiddling with this stuff, although, these seem to be fundamental flaws. On the other hand, I maintain another PC that will be handled by multiple users. Seriously considering ditching Cinnamon for XFCE, that is, if these problems don’t persist there also, and don’t somehow introduce some other buggy behavior that any normal user would have a problem with. Don’t get me wrong, I love Mint, and I love the distance that Cinnamon has come, but . . . . I simply can’t remain mindlessly silent about what’s still hanging out there. All that said, I do extend a hearty thanks to Clem and the Mint team. I know you are not ignoring these issues just to ignore them. There must be a reason. Thanks again. Kinda new to this. 
I was running Linux Mint 18.0 Cinnamon and was able to connect my Iphone 5s (running IOS 10.2) to my computer and access the DCIM folder to store photos on my computer. I upgraded my computer to Linux Mint 18.1 Cinnamon, and now when I connect my Iphone , it mounts. When I open the file folder, it says that it contains digital media, but it says there are 0 files. It is not seeing any file folders. Can anyone help?? Has the kernel been patched in 18.1 to address the recent CVE-2016-8655, CVE-2016-6480, and CVE-2016-6828 vulnerabilities? Edit by Clem: Yes. Although kernels don’t really get “patched”, you’re constantly getting new kernels backports. As of 4.4.0-53, these are fixed. You can identify which kernel version in particular brings which fix by looking at changelogs and the Ubuntu CVE tracker. That information is available from the Update Manager -> View -> Linux Kernels window. @Clem #268 Are they fixed for kernels before 4.4.0-53? Linux Mint’s Update Manager says that 4.4.0-53 is a Level 5 and not installed by default. Should I advise people to upgrade the Level 5 kernel package? There is also another Level 5 upgrade: Linux kernal 4.4.0-57.78 Should this also be upgraded? Sorry for all the questions, just a bit of confusion on my behalf. Thank you Edit by Clem: It’s ok. -53 is the kernel shipping with the stable release. Ideally, all upgrade should be considered and none of them blindly. @Clem #268 Kernel 4.4.0-31 was installed. Should this kernel be removed once kernel 4.4.0-53 is active? Or should kernal 4.8.0-32 be installed as it is newer? Thank you Edit by Clem: 4.8.0 is a new series and it’s not ready just yet (many drivers don’t work with it on the Xenial base just yet… virtualbox, broadcom etc..). Also 4.8.0 isn’t an LTS kernel, it won’t be supported for long, whereas 4.4.0 will continue to be. With that in mind, unless there’s support for some hardware you need in 4.8.0, I would recommend to stick to 4.4.0. 
All security fixes are sent towards both series right now, but it's only a matter of time before that stops for 4.8.0.

@264 Poloi, that option isn't there in the Themes dialog. I forgot to mention in my original post that I'm using Linux Mint Cinnamon in VirtualBox on a Windows 7 machine. I have Linux Mint MATE installed in another virtual machine, and that's working fine. Another problem which appeared after the upgrade is that when Linux Mint Cinnamon starts up, the desktop background doesn't display correctly. When I do Alt + F2 and r, the virtual machine crashes with an error message; then I try again, the same happens, and I try again until the machine doesn't crash. I have had a long-standing problem with shutting down from the menu, where the virtual machine crashes with an error message. I'm using VirtualBox's Close command to shut down the machine due to this problem. It seems that Linux Mint Cinnamon doesn't work properly in VirtualBox. I think that the developers should put much more effort into testing it in VirtualBox, as it creates a bad first impression for newbies if it isn't working properly. As I said, Linux Mint MATE doesn't have any problems in VirtualBox – it's Linux Mint Cinnamon that needs to be looked at.

Just did a fresh installation of Cinnamon 18.1 on my main system alongside KDE 17.3. Very smooth installation. Now, after adding all the software, some problems surface, which is to be expected.
– Sadly, the Samsung SSD (Evo 840) freeze problem is still there, even with recent kernels (not Mint's fault). It appears on concurrent disk access. I had to disable NCQ (libata.force=noncq), which by the way works fine on my OCZ SSD. After that it was ok.
– The Nvidia driver doesn't work with kernel 4.8 (immediate Cinnamon crash).
– The screensaver freezes the computer after a while. Needs a reboot. Quite annoying.
– The program "Synapse" crashes on starting (segmentation fault). Reinstalling has no effect. It was fine right after the OS installation though. Kupfer works ok.
– Clementine is still graphically corrupted.
– Steam will not start after installation. It had to be started through the patch script (export LD_PRELOAD=libstdc++ bla bla).
– Sometimes strange errors on some file copies: "There were some errors; it needs the root password to proceed bla bla". It was ok after giving it.

I am checking out everything, but besides those it runs fine and seems to be snappier than my KDE. I'll have to wait a bit for some fixes before it becomes my main system. KDE has been like a rock so far. 18.1 is very close now. Happy New Year. (Looks like Clement is still at work, sitting with his new baby.)

Edit by Clem: Happy new year 🙂 Many drivers won't work with 4.8 yet. I'm interested in knowing more about the screensaver… what version is it? How do we reproduce the issue? Steam should work out of the box, at least the version in the repositories.

Hi there. The 18.1 update went through without problems, but resuming from standby or hibernation now doesn't require a password!! It worked correctly in 18.0. I've checked as much as I can, but can't find where this might be a user setting. Please advise on this, as it's potentially a serious security risk.

Edit by Clem: This is part of QA so it should work in the version shipped with 18.1. I must assume a regression in some of the updates then; can you give me the version of cinnamon-screensaver? Also, can you kill its process, run it from a terminal (cinnamon-screensaver --debug), then hibernate, resume and see if you see any errors? There's a new version coming up, but I'm not sure about the cause of your issue, so I can't say for sure whether it addresses it or not. I assume you're on 64-bit? You didn't specify.

I understand that the Xfce 18.0 to 18.1 upgrade will be available soon. What about the Xfce upgrade from 17.3 to 18? Is it supported somehow? Ah ok, found this:

#272 Clem, I understand for the 4.8 kernel. It's quite a jump from 4.4. As for the freezing, I suspect the problems are not with Mint but with the Samsung Evo SSD.
It still freezes on various actions. Mint 18.1 and the screensaver work fine on my other computer with a SanDisk SSD (no AHCI though). I will investigate a bit more about the Evo. I couldn't get Steam to work out of the box on either machine though. It simply wouldn't launch – only with the script. Tx

#273 Thanks for the reply, Clem. Sorry not to mention relevant details – it is indeed Cinnamon 64-bit. Running 'cinnamon-screensaver-command --version' gives 'cinnamon-screensaver 3.2.12'. Running 'cinnamon-screensaver-command --activate' brings up the screensaver image, but doesn't lock the machine. I can't find a --debug option in 'cinnamon-screensaver-command', and if I kill the process, no 'cinnamon-screensaver-command' options work, reporting 'Can't connect to screensaver!'. Probably what you'd expect? Anyway, this may now be 'fixed' – I changed from the default screensaver, which is all I ever use, to one of the xscreensavers, and hibernation resumed to the lock screen. I changed back to the default screensaver, and hibernation still resumed correctly. I'll try a few reboots, suspends and hibernations when time allows to check that this behaviour is now permanent. V

Clem and everybody, Happy New Mint Year 🙂

Edit by Clem: Thanks Petrus, Happy New Year to you and your family.

Re my post @271, I tried shutting down Linux Mint Cinnamon from the menu – and it didn't crash! I think that problem may be solved, fingers crossed.

Linux Mint 18.1 Cinnamon 64-bit with 2 monitors does not show the first 5 files/folders on the Desktop after a restart; however, it shows all the files when using Nemo 3.2.2. Using 2 monitors where the primary monitor is smaller, as shown in: Files and folders are shown when created on the Desktop; however, once the computer is shut down and then turned on again, the files are not shown on the Desktop.
An explanation could be that the first 5 files/folders not shown are displayed to the left of the primary monitor, as the secondary monitor shows more to the left than the primary monitor. Can you reproduce this? Hopefully it can be fixed too. Please advise if you need more information. Thank you.

Hello, I am using Linux Mint 18 but don't get any upgrade notification in my Update Manager.

@281 sultaan Did you follow all the instructions at the top of this blog, after it says 'Enjoy'? Does the Update Manager offer you any updates at all? Were you offered mintupdate and mint-upgrade-info?

@272 To be more precise (last item): I get this error once in a while when copying files to another folder (NTFS, mounted drive): "a problem was detected in the thumbnails cache; fixing it requires administrative privileges" (approximate translation from French). Giving the password fixes it. Tx

Edit by Clem: Hi Marlenejo, it's actually a feature in Nemo. It detects when the thumbnails aren't up to date and when permissions prevent regenerating them. I've seen it pop up often lately though, and it probably needs some tweaking; it's definitely a bit sensitive at the moment.

@265 Looks like the new screensaver fixed my problem.

Edit by Clem: Ok, that's good. I wasn't sure this was the same problem.

Everything seems to go fine, except that Cinnamon 3.2.7, packaged with 18.1, breaks the "Windows 10 Light" theme. You'll either need to choose a new theme if you have that, or else downgrade back to the Cinnamon 3.0.7 that came with 18.0.

When upgrading from 18.0 to 18.1:

Failed to fetch: Could not connect to packages.linuxmint.com:80 (208.77.20.11), connection timed out [IP: 208.77.20.11 80]. Some index files failed to download. They have been ignored, or old ones used instead.

Just upgraded to 18.1 Serena. Smooth upgrade. Although only a few hours old, no problems or regressions seen yet.
Took a while, but that's because I have VERY crappy internet service here in rural North Central Florida, USA – 512K on a good day, and that's not today. The fastest I saw was 250k… mostly 110k. The download took about 20-30 minutes. Well worth it though so far. Great job, Clem and Team Mint. Just wish you guys had the resources to do a rolling release of Linux Mint. LTS releases get so stale after a while. Perhaps a yearly release in April, as that would correspond to the usual LTS release schedule by Ubuntu but would at least capture a year-later update in April as well. I think that would be a good compromise between the 6-month update schedule and the 2-year LTS schedule.

18.1 Cinnamon is a bit slow in terms of response when I want to open a folder. What could be the cause?

My install of 17.1 can't access ethernet on my new motherboard. Can I do an update from the installation disk?

@281 That's because you are the SULTAAN OF SWING? Sorry Clem, I could not resist the Men at Work reference. Great work guys.

Thank you, Linux Mint – one of the best distributions.

How do I configure the default screensaver so that it uses custom pictures, text or screen capture? I can't find it in Cinnamon.

I don't see this yet in the Update Manager, and I updated all the packages, etc.

lsb_release -irc
Distributor ID: LinuxMint
Release: 18
Codename: sarah

I'd leave it alone, but I do have some hanging issues, and have to restart Cinnamon from time to time, in addition to rebooting more than I used to with 17. I love, really LOVE 18. Unfortunately, I don't have enough Linux experience to recover if the upgrade goes sideways. How would one go about recovering from a blown upgrade? I want the accelerometer functionality on my ultrabook, and the newer kernel is supposed to support my wireless card more fully. I've got this machine set up just the way I like it and don't want to rebuild from scratch just to go to 18.1. Suggestions? Links to recovery tutorials? Anything?
Edit by Clem: Hi Jack, you can try the newer kernel without upgrading to 18.1. Install it from Update Manager -> View -> Linux Kernels, reboot and select it.

@shane: have you tried picking a different mirror?

@294 Jack Make a GParted Live CD, boot from it and copy your internal partitions onto another drive. If you're using a laptop/netbook, then use a USB-to-SATA cable to connect to an external SSD, or maybe use a big USB flash drive. If you ruin your installation in some way, simply boot from the GParted Live CD and then restore your drive state from the external backup/copy that you made earlier. It's helpful (and sensible) to have separate partitions for your root, home and personal data/media. I've been using this method for four years on my desktop, laptop and netbook. It's a great way of recovering from disasters.

RE: 120, 172, 185 Clem, mintbackup now works, allowing me to do a "Backup software selection" successfully. The file produced is 2010 lines long. Thanks, Ralph

I'm running the latest version of Bigwig on this version flawlessly. TY all for everything you do! Great work!

Continuing a previous issue: viewtopic.php?f=46&t=230685 Now: Here we go again! The latest update stopped at this juncture:

W: Failed to fetch 404 Not Found [IP: 208.77.20.11 80]

There seems to be a bit of code somewhere that, when included, causes this, as a previous cinnamon-common version loaded OK. Screensaver is off. Kernel not updated. Any suggestions?

Edit by Clem: Hi, that indicates an obsolete APT cache. Refresh the cache by pressing the Refresh button in the Update Manager or type "apt update" in a terminal. The reason it can't find cinnamon-common_3.2.6+serena_all.deb is that this version no longer exists in the repositories; the current version is 3.2.7.

@nomadewolf Even though I've picked different mirrors, and reset to the default mirrors, still no luck updating my primary computer to 18.1 via the Update Manager (or even being offered the chance).
My virtual machine updated fine though. Don't know why there is such a difference.

Edit by Clem: Can you paste the content of /etc/linuxmint/info, and the result of "apt policy mintupdate" and "apt policy mint-upgrade-info"?

Thanks Clem – but I tried that many times, as my blog post documents. Again, this time, as before, the update ran through using tethering to an iPhone!

Edit by Clem: Oh… wait a second. You're using 3G or something like that. I've seen this in some countries: some mobile operators cache traffic nationwide, and that causes issues. APT is basically given old info by the ISP and fooled into thinking there is no new content. Can you paste the output of "apt update"?

2nd Edit by Clem: Anand, these are called "transparent proxies". If I remember well, you can force APT to ignore cache info with "sudo apt-get update -o Acquire::http::No-Cache=True".

OK, so now I'm feeling a bit dumb. 😳 😳 I updated with the Update Manager using my tethered iPhone, as I once did before. Then, following Clem's suggestion (thanks again!), I checked the APT cache, and after doing that I also found there were some 42 upgrades waiting to be installed. Therefore, in a terminal I ran apt update, after which I ran apt upgrade. Now I hope this saga is over. 😛 Just, why didn't the Update Manager show that upgrade files were available? Happy New Year to one and all – a bit late, but better than never.

Edit by Clem: It's basically your ISP using a transparent proxy. Add this option in /etc/apt/conf.d/; you'll need it for APT to work properly. If possible, also change ISP.

I love Mint and have used it for years. The only question I have concerns Cinnamon. I use Xfce because it is easier to manipulate the panels. I was wondering if this update will bring more functionality to the panels in Cinnamon. Otherwise, great job. Looking forward to Mint Xfce 18.1.

Clem: Thanks for the insight. The mobile proxy update is maybe more up to date than the NTT FTTH LAN.
As I'm in Japan, every ISP seems to have a transparent proxy – speed marketing perhaps. Both Chrome & Firefox show differing results: Firefox: Chrome: I'll try "sudo apt-get update -o Acquire::http::No-Cache=True" if the problem arises again. Again, many thanks for all the work in creating LM and also for helping with my issue. Very pleasant personal interest.

I updated all my machines all the way from MATE 17.1. I did not have a single problem, whether it was a desktop or a laptop. I MUST tell everyone reading this that MATE 18.1 is actually faster, with better performance, than 17.1. In fact, MATE 18.1 is the fastest full-featured DESKTOP that I have ever used! I have been using Linux professionally and personally since 1997. This is INCREDIBLE. I am retired now, but I have been converting customers and friends over to Linux Mint MATE 18.1. WOW, this is great. I donated $50 for you and your project. Thanks, Patrick

Thank you very much for all your hard work!

Clem: Sorry, could you tell me where to add your suggested addition to avoid cached data? The folder /etc/apt/conf.d/ doesn't exist on my LM 18.1 Serena Cinnamon 64-bit. There is /etc/apt/apt.conf.d/ with 12 files. To which should I add "Acquire::http::No-Cache=True"? The updates/upgrades have made Serena very stable once again.

Re #262 – The problem with jpg/pdf/epub file properties is specific to the user profile that was upgraded from LM 18.0. If I create a new user on these machines, the problem does not appear on the new profile. Not worth getting excited about 🙂

There is a bug when using two monitors (unmirrored). If you elect to allow desktop icons on both monitors (via Desktop Settings), then create a desktop launcher or folder on the non-primary desktop, it will vanish after you log off. Inspecting the Desktop folder in Nemo shows the item is still there, but it is not visible on the monitor. My monitors are not physically identical but have identical resolution settings.
Have reported it on Launchpad. This is possibly the same problem mentioned by Óvári at #280. Clem and team, I am enjoying LM 18.1 very much, thank you.

Edit by Clem: Thanks Tony, please report this on GitHub though:

I upgraded to 18.1 – smooth process, no problems with upgrading. 2 things I have noticed that are annoying but not critical:
1. Boot time is slower by a few seconds – haven't worked out why yet.
2. The Bluetooth control applet has gone, although Bluetooth is working fine.
Otherwise it's great, thanks!

Re #309 () Thanks Clem, have logged it on GitHub as requested.

@296 Ahab – Ah, you make it sound so easy… 😉 So if I manage to somehow make these partition copies to some external media (hopefully my NAS), then I just run the regular upgrade process? Assuming it goes well, then I can simply hang on to the partition copies for a bit. A couple of questions for you and/or Clem:
1 – Does the upgrade process resize my partitions, and should I worry about it?
2 – Should it get borked, I simply recopy the partition info back to the same partitions, overwriting as I go, right? See question 1…
3 – I have sda3 encrypted. How's that going to impact the process?
4 – Re: support of my new Skylake laptop – Can I simply upgrade to (pick a version) a 4.8.x kernel to get the hardware support? Is there a better one than another – I see three selections under "kernels." I was hoping that 18.1 would be on a 4.8 kernel, as my wireless card (Intel 8260) is not supported very well in 4.4, taking multiple reboots/resets/switch flipping to get it to connect to a WLAN, though once connected, it's solid. The accelerometer support is far less important than WLAN support. Thanks to all.

@311 Jack It is easy, if you're restoring back to the same drive. If you use this method to make a copy/clone of your installation onto another drive for booting/running, then you will probably need to run the Boot Repair utility disk on the copy to fix some GRUB references. This is not a big deal.
Before I upgraded to 18.1, I cloned my 18 installation onto a spare drive and did the upgrade on that first, to make sure there were no problems. I use this method all the time and it has always worked. You'll need the 32-bit or 64-bit GParted Live CD, easy to get and make. If you want to make bootable clones, you'll probably need the 32-bit or 64-bit Boot Repair CD to boot from, also out there on the internet, a Google search away. For me, a 21GB root partition takes just over 3 minutes to copy, with the drive plugged into a spare SATA slot. I use SSDs mostly. If you use a NAS or a USB-linked drive, it will take longer of course. That's what coffee is for.

If you're going to be cloning partitions or adding partitions for extra data/media/whatever purposes, I strongly suggest that you edit your /etc/fstab file to use device identifiers instead of partition UUIDs. Then, you can more easily read, edit and understand the changes you make to the fstab. Here's my fstab as an example:

As for your numbered questions:
1 – Upgrading doesn't make your partition bigger, but it will make it a bit more full. As long as it's less than 90% full, that's ok. Using GParted Live, you can easily resize your root partition if needed, after you've made some room at the end of it. You'll understand all this when you run GParted and see the onscreen display.
2 – If you bork your working installation, just delete your root and home partitions, then copy the backed-up versions back onto your drive. I suggest that you get used to it by practicing with a spare drive, so that you learn the menus and see the display in action. Note: You can 'play' with the internal GParted utility, but you can't use it to make a copy of your root or home partition – it won't let you. It won't copy a mounted partition, and you can't unmount your root or home. You have to boot from the GParted Live CD for that.
3 – Any encryption on a partition is encryption of the data inside it, for the use of the operating system.
This should not affect the copying and moving of the partition by GParted. I suggest that you clone your entire installation and run it to make sure of that.
4 – I don't know about this. Give it a try; it's worked for me for four years with no problems, apart from sometimes needing to run Boot Repair on it if I want to boot/run a clone. This usually happens when I clone onto a drive that's a different make/model or size from the original. I recently upgraded from a 60GB SSD to 120GB (by cloning partitions) and had to run the Boot Repair CD on it. After that, it worked fine.

Updated my laptop without problems; just wondering which kernel to update to now? (At the moment using 4.4.0-21.) 4.4.0-53 or 4.4.0-57? Thanks.

@311 Jack I've just realised: you won't be able to copy a partition onto your NAS drive, not over your LAN. You need to connect an actual storage device to your computer to do that. If you have a laptop/netbook, then you'll need a USB-to-SATA cable for an SSD or 2.5″ hard drive (slow but not too bad) or use a large USB memory stick (very slow).
https://blog.linuxmint.com/?p=3185
Is There Ever A Bad Time To Buy Stocks?

Since the beginning of this year, I have been on the road telling people that what I warned them about last year at this time is coming to fruition – the U.S. economy has most likely entered a recession. Because it is such a pain to get online when traveling, I often have to rely on the financial news television networks for my market information. Typically, I mute the sound on these stations when I am watching. But what I have noticed over the years is that in those random moments when the sound is on, the preponderance of hosts and guests are telling the viewing audience that now, regardless of when "now" happens to be, is a great time to buy stocks.

Chart 1 shows the compound annualized rate of increase of the monthly average level of the S&P 500 stock market index over 10-year (120-month) periods. The data series starts in January 1921 and ends in January 2008. In the post-WWII era, with only a few exceptions, the S&P 500 does post increases over 10-year periods. In the post-WWII era, recessions (shaded areas in the chart) do not seem to be a controlling factor with regard to 10-year increases or decreases in the S&P 500 index. That is, over numerous recessions, the S&P 500 index posts increases.

Over 30-year spans, the S&P 500 index has always posted positive compound annual growth from January 1921 through January 2008 (see Chart 2). I am aware that the stocks that make up the S&P 500 index are not constant through time, so these gains overstate the actual gains. Nevertheless, if one's holding-period investment time horizon is 10 years, one can usually expect a positive return (not including dividend reinvestment) by purchasing the S&P 500 index. If one's holding-period investment time horizon is 30 years, one can always expect a positive return by purchasing the S&P 500, at least based on 88 years of history.
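The 10-year figure plotted in Chart 1 is simply a compound annual growth rate computed over a 120-month window. As a quick illustration of the arithmetic, here is a minimal sketch; the index levels used below are made-up numbers for demonstration, not actual S&P 500 data:

```python
def compound_annual_growth(start_level: float, end_level: float, years: float) -> float:
    """Compound annualized rate of change between two index levels."""
    return (end_level / start_level) ** (1.0 / years) - 1.0

# Hypothetical monthly average index levels, 120 months apart (not real data).
jan_1998 = 960.0
jan_2008 = 1425.0

rate = compound_annual_growth(jan_1998, jan_2008, years=10)
print(f"10-year compound annual growth: {rate:.2%}")  # about 4% per year for these levels
```

Sliding this calculation one month at a time across the whole series produces the curve shown in the chart; the 30-year version in Chart 2 is the same formula with a 360-month window.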
So, with a sufficiently long holding-period investment time horizon, it could be said that it is always a good time to purchase the S&P 500 index, in that you will usually achieve a gain.

Chart 1

Chart 2

Notice, however, that on a year-over-year basis, there often are declines in the S&P 500 index (see Chart 3). More often than not, these year-over-year declines in the S&P 500 index are associated with recessions. In fact, in only two recessions, 1926 and 1945, did the S&P 500 index not post a year-over-year decline. So, if one is a short-term investor, there clearly are better and worse times to buy the S&P 500 index. It is better to buy after the economy has entered a recession, and it is better to sell before the economy has entered a recession. My beef is not with those who are telling you to buy now, given that, in all likelihood, the economy has entered a recession. But I would like to rewind the tape to see whether these same "buy now" experts were also urging you to sell last October.

Chart 3

If touts are always telling you now is a good time to buy common stocks, they must be referring to relatively long holding-period investment time horizons. If you are an investor with a relatively long holding-period investment time horizon, you don't need the advice of these touts. Your time would be better spent watching Seinfeld reruns rather than watching the financial news networks. If you are a short-term investor, and touts are always telling you that now is a good time to buy stocks, the advice of these touts is incorrect. Your time would be better spent doing your own research on market timing, or subscribing to a service that has a track record of accurate market timing (if you can recommend any to me, I would appreciate it), rather than watching the financial news networks.

In sum, other than for receiving financial market data and/or for entertainment value, tuning in to the financial news networks is almost always a waste of your time.
Paul Kasriel is the recipient of the 2006 Lawrence R. Klein Award for Blue Chip Forecasting Accuracy.
http://www.safehaven.com/article/9428/is-there-ever-a-bad-time-to-buy-stocks
3 December 2012

The Flash C++ Compiler (FlasCC) provides a complete C/C++ development environment based on GCC that you can use to compile C/C++ code to target Adobe Flash Player and Adobe AIR. Using flascc, you can port virtually any of your existing C/C++ code to the web. This article contains tips and tricks to help you get started with flascc.

Like most GCC-based toolchains, flascc uses GDB as its debugger. It provides a customized version of GDB to debug flascc-compiled C/C++ code that is running within Flash Player. To execute GDB, you must define a FLASCC_GDB_RUNTIME environment variable that contains a fully qualified path to either a standalone debugger version of Flash Player or a web browser in which the debugger version of Flash Player is installed. If the path contains spaces, use either double quotes (Windows) or escape the spaces with a backslash (Mac). For example:

FLASCC_GDB_RUNTIME="/cygdrive/c/Program Files (x86)/Mozilla Firefox/firefox.exe"

export FLASCC_GDB_RUNTIME=/Users/flex/runtimes/player/11.3/mac/Flash\ Player\ Debugger.app

If you are using Firefox, you may want to extend or disable the plugin hang detection time to avoid the browser prematurely killing the SWF while you are debugging. For more information on how to do this, see the MozillaZine article on this topic.

If your flascc application is running on a remote site (a staging server, for example), set the FLASCC_GDB_RUNTIME environment variable to point to your browser (as described above) and then pass the URL of your remote application to GDB when you start it, for example: gdb

When you compile a SWC, you specify a namespace. So, when you debug that SWC, you need to tell GDB which namespace you want using the set as3namespace command. You can debug only one SWC per session. For more information, see the GDB section of docs/Reference.html of your local flascc installation or online.
While debugging with GDB, you can use the call command to invoke a function in your code. This can be an arbitrary function in which you are interested, or it can be a function that you have added strictly for debugging purposes. For example:

(gdb) call dumpValues()

When you rerun a SWF in GDB, you may see a warning like the following:

warning: Temporarily disabling breakpoints for unloaded shared library "remote:0.elf"

You can safely ignore these warnings. Here is an example of running and rerunning a SWF in GDB:

<path>/sdk/usr/bin/gdb call.swf
(gdb) r
Starting program: call.swf
0xdddddddd in ?? ()
Breakpoint 1, 0xf0000247 in main (argc=0, argv=0x200ff0) at call.c:66
66        int s = 2;
(gdb) r
The program being debugged has been started already.
Start it from the beginning? (y or n) y
warning: Temporarily disabling breakpoints for unloaded shared library "remote:0.elf"
Starting program: call.swf
0xdddddddd in ?? ()
Breakpoint 1, 0xf0000247 in main (argc=0, argv=0x200ff0) at call.c:66
66        int s = 2;
(gdb)

In certain situations, when you run GDB, you will see the following message:

No symbol table is loaded.... pending on future shared library load

You can safely ignore this warning.

GDB does not display ActionScript 3 Strings in the regular list of locals if the String is longer than 80 characters. To view these Strings, use the monitor eval command. For example, monitor eval myLongVariable displays the String value regardless of the length.

When you're done debugging and ready to optimize, consider the following tips.

By default, the flascc version of GCC produces ActionScript ByteCode (ABC). This is adequate for most of the development cycle, when fast compile times are what you need. However, for full optimization, use the -emit-llvm or -O4 options when invoking GCC. These options result in LLVM bitcode, which creates a Link Time Optimized (LTO) build.
Although it takes longer to compile with these switches, you get a final fully-optimized build with everything compiled as LLVM bitcode. For example, the following command lines create a fully optimized build:

gcc test.c -emit-llvm -c -o test.o
gcc test.c -O4 -c -o test.o
gcc test.c -emit-llvm -emit-swf -o test.swf
gcc test.c -O4 -emit-swf -o test.swf

When you use GCC to generate a SWC, you must use the -emit-swc= option to specify the AS3 package name that will contain the generated code and the internal flascc boilerplate code. This lets you link multiple flascc-generated SWCs into one SWF without any function or class collisions. The following example is from /tutorials/05_SWC/makefile:

"$(FLASCC)/usr/bin/g++" -O4 MurmurHash3.cpp as3api.cpp main.cpp -emit-swc=sample.MurmurHash -o MurmurHash.swc

For more information, see Building SWCs in docs/Reference.html of your local flascc installation.

Although GCC includes many options to minimize output size, the following are the most common techniques: /tutorials/12_Stage3D/exports.txt.

The SWF version determines which version of the AIR and Flash Player APIs are available. For example, SWF 17 equates to AIR 3.4 and Flash Player 11.4; if you specify SWF 16, your application will have access to only the AIR 3.3 and Flash Player 11.3 APIs. If, for some reason, you want to target an older SWF version, use the GCC -swf-version= option. The current default is SWF version 17, so to change it to SWF version 16, you'd specify the following:

-swf-version=16

For more information on the mapping between SWF version and Flash Player version, see Creating the extension descriptor file.

Flascc provides a set of header files that wrap the Flash top-level classes (for example, Number and Array), as well as the Flash class library (for example, flash.display.Stage and flash.display.Stage3D). These wrappers provide access to all methods, properties, and events of the corresponding ActionScript classes.
For more information, see Interop Between C/C++ And ActionScript in docs/Reference.html of your flascc installation. The wrappers are also used in tutorials/12_Stage3D/stage3d.cpp.

Flascc applications automatically include a simple preloader that displays as the SWF loads. You can create a customized preloader to use with your flascc application. This is explained in the flascc Reference Guide, available in the /docs directory of your flascc installation.

Although it is strongly recommended that you use 64-bit Java with flascc, some developers have been able to use 32-bit Java for certain small applications. In this case, if the machine has limited memory, you may need to reduce the Java heap specification (the default is -jvmopt="-Xmx1500m") to a smaller number, for example, -jvmopt="-Xmx256m". You can do this when invoking GCC in the makefile. For example:

gcc -jvmopt="-Xmx256M" hello.c -o hello.exe

Conversely, if you are running 64-bit Java and your machine has memory to spare, you may want to increase the Java heap size. 64-bit Java is available from the Oracle software downloads page.

Here are some additional tips for using flascc.

When you link against a SWC, you must specify the object that will be the Console. You should call CModule.vfs.setConsole(this); before the call to initLib() in the ActionScript file, and also include an implementation of write() that looks something like this:

public function write(fd:int, buf:int, nbyte:int, errno_ptr:int):int {
    var str:String = CModule.readString(buf, nbyte);
    trace(str); // or display this string in a textfield somewhere?
    return nbyte;
}

For another example, see Console.as in the 04_Animation tutorial.

To specify the SWF dimensions, pass the -swf-size=widthxheight flag to GCC; for example: -swf-size=200x200.

To define an import so that you can use Context3D in a function call, use the as3import annotation.
The following example snippet is from /tutorials/Example_BulletPhysicsLibrary/bullet.i:

void positionAndRotateMesh() __attribute__((
    used,
    annotate("as3sig:public function positionAndRotateMesh(mesh:*, rigidBody:*):void"),
    annotate("as3package:org.bulletphysics"),
    annotate("as3import:flash.geom.Vector3D")));

When using the SWIG compiler to create SWCs from your C/C++ header files, note that the generated wrappers do not automatically handle std::string, nor any of the C++ standard library classes. Because of this, many uses of std::string or similar classes in wrappers may not work as expected, and you'll need to write application-specific typemaps to achieve the desired behavior.

You can use the flascc AS3 Wrapper Interface Generator (as3wig) to create a C++ library interface for existing ActionScript 3 code in the form of ActionScript Byte Code (ABC). For an example, see /tutorials/12_Stage3D/makefile in your local installation, specifically, the following line:

sdk/usr/lib/as3wig.jar -i AGAL.abc -o AGAL

When your SWF loads, the Console class is the first class to be created and executed. It is an ActionScript class that controls when and how the flascc-compiled code is initialized. It is also used by the underlying flascc POSIX implementation as the object that handles read and write requests to the various standard input, output, and error terminal streams.

In some cases, you will want to create a custom Console implementation. For example, although you can interact with all of the Flash APIs using the C++ wrappers included in Flash++.h, you might prefer to use pure ActionScript 3 in your Console implementation. For information on the Console class, see the ActionScript API docs in /docs/apidocs/index.html of your flascc installation. For complete information, see /docs/Reference.html of your flascc installation.

Flascc does not support fsync(). Instead, close the file, which will commit your changes.

The com.adobe.flascc.CModule ActionScript class contains convenience functions for reading and writing to domainMemory.
It also manages any flascc-specific global state (for example, the VFS and POSIX interface implementations). Among other methods, note the difference between the following:

startAsync() - Calls the libc __start1 function which, in turn, causes main() to execute.

startBackground() - Creates a background Worker and runs the libc function __start1 within that Worker. This method requires SWF version 18 (Flash Player 11.5) or higher.

For information on the CModule class, see the ActionScript API docs in /docs/apidocs/index.html of your flascc installation.

For further reading, see the Adobe Gaming flascc page.

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 Unported License.
http://www.adobe.com/devnet/games/articles/flascc-tips-tricks.html
On Mon, Mar 12, 2012 at 5:21 > read through PEP 395, I'm providing those objections in a > consolidated form here. (Thanks for that.)

>)

Whatever. There's "practicality beats purity" though, and unmarked directories are quite intuitive and logical to newcomers. In fact, in the original package implementation (ni.py, checked in originally with rev 2887:ec0b42889243), __init__.py was optional. At the time we hadn't thought of the use case of "namespace packages" like zope.interfaces and zope.components, where there are multiple distributable "bundles" that install different *portions* of the package.

During the meeting it also came up that there are two styles in use for this purpose: multiple distro bundles that install into the *same* directory, or multiple distro bundles installing into different directories (whose parents are all added to sys.path separately). We also came up with the encodings package as a potential namespace package, although it currently doesn't have an empty __init__.py. But hold that thought, there's more that I'll address later.

>)

We must have explained this badly, because (just like PEP 402, AFAIK) this is *not* how it works. It works as follows:

*If* there is a foo.py or a foo/__init__.py anywhere along sys.path, the *current* rules apply. That is, if one of these occurs on an earlier sys.path entry, it wins; if both of these occur together on the same sys.path entry, foo/__init__.py wins. (We discovered that the latter disambiguation must prefer the directory, not just for backwards compatibility, but also to make relative imports in subpackages work right. This is probably the biggest deviation from PEP 402.) And in this case all those foo/ directories *without* a __init__.py in them are completely ignored, even if they come before either foo.py or foo/__init__.py on sys.path. (If __init__.py wants to manipulate its own __path__, that's fine.)
*Only* if *neither* foo.py *nor* foo/__init__.py is found *anywhere* along sys.path do we take all directories foo/ along sys.path together and combine them into a namespace package. If there are no foo/ directories at all, the import fails. If there is exactly one foo/, it acts like a classic package with an empty __init__.py.

We avoid having to do two scans of sys.path by collecting info about __init__.py-less foo/ directories during the same scan where we look for foo.py and foo/__init__.py; but we collect it in a separate variable. (It occurs to me that this may not be trivial when PEP-302-style finders are involved. That's a detail that will have to be figured out later.)

So the only backwards incompatibility is that "import foo" may succeed where it previously failed if there is a directory foo/ somewhere on sys.path but no foo.py and no foo/__init__.py anywhere. I don't think this is a big deal.

(Note: where I write foo.py, I should really write foo.py/foo.pyc/foo.pyo/foo.so/foo.pyd. But that's such a mouthful...)

>

I know this bothers you greatly, because you wrote at great length about it in PEP 395. But personally I think that being able to guess the highest package directory given the name of a .py file nested deep inside it is a pretty esoteric use case and I can live with this continuing to be broken (since it is already broken) for the sake of a simpler package structure (no __init__.py files!).

> What are implicit package directories buying us in exchange for this
> inevitable ambiguity? What can we do with them that can't be done with
> explicit package directories? And no, "Java does it that way" is not a
> valid argument.

Apart from the pitchfork incident referenced in PEP 402, I have had many other complaints about the ubiquitous empty __init__.py files. They may be empty, but they sure take up space in e.g. directory listings or zipfiles.
For example, there are 409 empty __init__.py files in the Django 1.4c1 distro, plus 25 more that contain either an empty comment or a blank line. I've also seen __init__.py files with a single rude comment in them, and in my G+ stream I've seen comments on random Python topics making a snide reference to empty __init__.py files. (There are also coding guidelines in some places that prohibit having real code in __init__.py files.)

Quite separately, it also gives us an easy way to have namespace packages spread across multiple directories. This is clearly a popular feature, given that there are at least *two* different convenience APIs to make this easy (one in pkgutil.py, another in setuptools). I did a quick search for "import pkgutil" on koders.com and the first 25 hits (of 792) are all declaring namespace packages, many using an awkward idiom using a try/except to import either pkg_resources or pkgutil. This awkwardness really bugs me and being able to eventually drop it is a big draw for me.

>" [...]

Our *four*...no... *Amongst* our weapons.... Amongst our weaponry...are such elements as fear, surprise.... I'll come in again.

>.

I understand your frustration at just having analyzed this mess and come up with a solution, only to see it permanently sabotaged before you could even implement it. But it's an existing mess, and if I really have to choose between solving this mess or solving the empty-init mess, I vote for solving the latter.

But I would hope that the most common cases are still that the package in fact already exists on sys.path, possibly because it is rooted in the current directory, or because the package has been properly installed. In this case you should have no problem computing the toplevel package implied. The other common case is where the current directory is *inside* the package. I agree this is a bad mess. But does this happen with a typical IDE? It seems more common when using the shell.
Anyway, maybe we just have to document more aggressively that this is a bad idea and explain to people how to avoid it. (One of the ways to avoid it would be "add an empty __init__.py to your package directories", since that will in fact still avoid it.)

There's also a nasty habit that Django has around packages and parent directories. The Django developers announced at PyCon that they're breaking this habit in Django 1.4. (And they also announced that Django 1.5 will be compatible with Python 3.3!)

>.

I agree that telling newbies to do *anything* with pkgutil is backwards.

>.

Please reconsider -- there was at least one important detail in the proposal that you misunderstood.

> Also, I consider it a requirement that any implicit packages PEP
> include an update to the tutorial to explain to beginners what will
> and won't work when they attempt to directly execute a module from
> inside a Python package.

That's fine.

> After all, such a PEP is closing off any
> possibility of ever fixing the problem: it should have to deal with
> the consequences.

Not so gloomy, Nick! There are still quite a few cases that can be detected properly. I think the rule "don't cd into a package" covers most cases.

--
--Guido van Rossum (python.org/~guido)
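The single-scan resolution rule described earlier in this message can be sketched in Python. This is an illustrative model only, not the actual implementation; as noted above, the real version must also handle PEP-302-style finders. Each sys.path entry is modeled as a small dict rather than a real directory.

```python
# Model of each sys.path entry:
#   {"files": set_of_filenames, "dirs": {dirname: has_init_py_bool}}

def resolve(name, path_entries):
    """Resolve 'name' against a list of modeled sys.path entries.

    foo/__init__.py beats foo.py on the same entry; earlier entries
    beat later ones; __init__.py-less directories are only collected
    (in the same single scan) and combined into a namespace package
    when no foo.py or foo/__init__.py exists anywhere.
    """
    namespace_portions = []  # indices of __init__.py-less foo/ directories
    for index, entry in enumerate(path_entries):
        dirs = entry.get("dirs", {})
        has_init = dirs.get(name, False)                 # foo/__init__.py present?
        has_module = (name + ".py") in entry.get("files", set())
        if has_init:
            # foo/__init__.py wins, even over foo.py on the same entry
            return ("package", index)
        if has_module:
            return ("module", index)
        if name in dirs:
            namespace_portions.append(index)             # remember, keep scanning
    if namespace_portions:
        # No foo.py or foo/__init__.py anywhere: combine the directories.
        return ("namespace", namespace_portions)
    raise ImportError(name)
```

Note how an init-less directory on an early entry is simply skipped when a real module or package appears later on the path, which is exactly the backwards-compatible behavior described above.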
http://mail.python.org/pipermail/import-sig/2012-March/000430.html
Choice of toolkit:

- It’s modular, so you only need to include the “base” file and then pull in the specific bits you’ll actually need.
- The official download contains “regular” and “compressed” versions of all the files; you can use the “regular” ones while you’re debugging, and once everything works switch to the “compressed” ones so your users will have faster downloads.
- It only adds one object — YAHOO — to the global namespace (this plays nicely with other JS you may be using, and makes it tons easier to debug).
- It provides a number of pre-built “widgets” and effects, but is much more oriented toward a “building blocks” style of development.
- Philosophically, it makes it much easier to write JavaScript that feels “natural” (for example: their event handler, by default, corrects the scope of functions it attaches, so that this will refer to the element which fired the event).
- It’s obsessively well-documented, with high-level overviews of each module, complete API references and “cheat sheets”.

The example we’re building will pull in things from four modules of YUI: the DOM library, the event library, the connection manager and the animation utility.

How AJAX programming works. Good code structure is key:

- Get references to the elements in the page which contain the form and which will display the results.
- Change the “results” element’s opacity so we can do a nice fading animation on it later.
- Hijack the form so we can submit it via AJAX instead of doing a “normal” submission.

- We know:
- Because we’re inside an object, we’re building attributes of the object. JavaScript uses the name: value syntax you may be familiar with from Python dictionaries, so it’s init: function()…, not init = function()….
- This is actually a completely anonymous function, being built up with function literal syntax and then assigned as the value of the init attribute.
If you don’t know what that means, don’t worry; but if you do, keep it in mind — JavaScript can be a very functional language, and that’s often a great strength.

- When we add a function to listen for the form’s submission, we add it to the form’s submit event, not the submit button’s click event — this is because in most browsers, it’s possible to submit the form by pressing Enter in a text input field. Listening for submit means we always catch form submissions, no matter how they happened.
- We’re setting up the function.

We’re initialized; now:

- First we call.
- Then we cycle over the).
- We temporarily disable the form’s inputs, by looping over them and setting each one’s disabled property.
- We fire off the AJAX request. This is handled by first saying that we want to submit this particular form, which is what.

Writing the response:

- The “results” div will completely fade out.
- Then its contents will be cleared.
- Then the new contents will go into it.
- Then it will fade in again.

Showing the errors in the form:

- Grabs the dd which contains the input, by doing getElementById using the name of the field (which is also the name of the attribute in the error message object which contains the error message, and thus is the “thing” we’re getting in this iteration of the loop).
- Builds a new dd with its class set to error, containing the error message, and sets its opacity to zero.
- Sets up a “fade in” animation for that dd.
- Sticks the new dd before the container for the form field (using insertBefore), and fades it in.

And one more thing…. And we’re done.
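On the server side, the view this post submits to would validate the POST data and hand back either the rendered results or a per-field error object for the error-display loop to iterate over. Here is a framework-free sketch of that shape; the field names and validation rules are hypothetical, not taken from the original post:

```python
def validate_contact(data):
    """Validate a submitted contact form (hypothetical rules).

    Returns (ok, payload): on success, payload is the HTML fragment to
    drop into the results element; on failure, it is a
    {field_name: message} mapping mirroring the error object the
    client-side error-display loop iterates over.
    """
    errors = {}
    if not data.get("name", "").strip():
        errors["name"] = "This field is required."
    if "@" not in data.get("email", ""):
        errors["email"] = "Enter a valid e-mail address."
    if errors:
        return False, errors
    return True, "<p>Thanks, %s!</p>" % data["name"]
```

Keying the error object by field name is what lets the JavaScript above find the matching dd with a single getElementById per error.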
https://www.b-list.org/weblog/2006/aug/05/django-tips-simple-ajax-example-part-2/
Draggable toggle-switch component for React

# React-Switch

A draggable toggle-switch component for React.

- Draggable with the mouse or with a touch screen.
- Accessible to visually impaired users and those who can't use a mouse.
- Customizable - Easy to customize size, color and more.
- Small package size - 2 kB gzipped.
- It Just Works - Sensible default styling. Uses inline styles, so no need to import a separate css file.

## Installation

```
npm install react-switch
```

## Usage

```jsx
import React, { Component } from "react";
import Switch from "react-switch";

class SwitchExample extends Component {
  constructor() {
    super();
    this.state = { checked: false };
    this.handleChange = this.handleChange.bind(this);
  }

  handleChange(checked) {
    this.setState({ checked });
  }

  render() {
    return (
      <label htmlFor="normal-switch">
        <span>Switch with default style</span>
        <Switch
          onChange={this.handleChange}
          checked={this.state.checked}
          id="normal-switch"
        />
      </label>
    );
  }
}
```

## What's the deal with the label tag?

## API

## Development

You're welcome to contribute to react-switch. To set up the project:

- Fork and clone the repository

```
$ npm install
$ npm run dev
```

The demo page will then be served in watch mode, meaning you don't have to refresh the page to see your changes.
https://reactjsexample.com/draggable-toggle-switch-component-for-react/
I'm getting the following logs when I execute the kubeadm init command; I have no idea what's wrong.

root@kmaster:/home/wael# sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.145.5 --ignore-preflight-errors=all --node-name kmaster
[init] using Kubernetes version: v1.12.2
.
.
.
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node kmaster as master by adding the label "node-role.kubernetes.io/mas..."
[markmaster] Marking the node kmaster as master by adding the taints [node-role.kubernetes.io/mas...]
error marking master: timed out waiting for the condition
https://www.edureka.co/community/29407/marking-master-waiting-condition-creating-kubernetes-cluster?show=29408
I have a Flash Pro project with numerous items in the Library linked to Class files. I can successfully export a SWC and use the items in Flash Builder. However, how would one override the Class in the SWC with a new one in FB? Currently, to edit the Class, I have to open up the project in Flash Pro, edit the Class and export the SWC again. Only then will I see the changes. When I try to put a new Class in Flash Builder with the same name as the one in the SWC, it will run that new Class but the assets of the Library item are not there. Hopefully I am missing something obvious.

This is a working hack but I would still prefer a better way! In Flash Builder I create a Class that is near-to but not exactly the same as the original Class from the SWC. In this Class I make it extend the original Class in the SWC. e.g.:

Class in SWC: SomeClass.as
Class in FB: SomeClassExt.as

In SomeClassExt I use:

public class SomeClassExt extends SomeClass {
}

I still need to test this out further but so far so good. Obvious or not. One thing I should mention: at least with my current example I need to remove a listener or the Class fires twice:

removeEventListener(Event.ADDED_TO_STAGE, Init);

So why do I not remove the Class from the Library item in Flash Pro to begin with? I built all the items and their Classes in Pro. Moving to FB for this project is recent. I can still test the functionality of the Classes in Flash Pro doing it this way and not have to put them all in the same project folder in FB. Again, I would still prefer a better way to just override the SWC's Class and not have to create new Class names in FB. This requires a lot of changes to an otherwise functional application.
http://forums.adobe.com/thread/1209429
A website that is not configured to operate in a disconnected state is unavailable in any form if an Internet connection is not available. For instance, Figure 1 demonstrates the type of response you may encounter while trying to view a site without an Internet connection. Fortunately, “offline web applications” continue to work for users regardless of Internet connection status on the client. The availability is made possible by the new HTML Offline Web Application API, also known as HTML Application Cache. An offline application is a packaged group of web pages, style sheets, and/or script files that are reliably available to the client whether or not a web connection is present.

Offline web applications are available through the new HTML Offline Web Application API, also known as HTML Application Cache. Beyond simply serving pages to the user when an Internet connection is unavailable, often an offline application requires storage of the user’s information. HTML Web Storage is able to store relatively large amounts of information on the client, giving you the ability to save data locally and synchronize with the server as a connection to the web becomes available. The example in this article uses the Application Cache and Web Storage APIs together to build an application that works offline to store user information and automatically synchronize with the server when available.

What Is the Application Cache?

As stated above, an offline application is a packaged group of web pages, style sheets and/or script files that are saved on the user’s machine in the application cache. When a request for a file from the application is initiated, instead of requesting the file from the web server, the file is served from the application cache. In order to keep the application packaged and versioned correctly, a file called the application manifest maintains a master list of files in the application.
When connected to the web, the manifest is checked for updates and any new versions of the application’s files are downloaded in the background for the next visit to the page. Pages loaded into the application cache are served from the cache whether or not a connection to the Internet is available.

Browser Support

As with any new web technology, the question of browser support is often a determining factor for widespread use by web developers. The good news is that the latest versions of the mobile web browsers support offline applications, along with some very early releases of many desktop browsers. Unfortunately, Internet Explorer support is not scheduled until the release of version 10. Table 1 details the browser support for offline applications for a wide array of desktop and mobile browsers.

Anatomy of an Offline Application

There are a number of elements working in concert that enable an offline application to operate as intended. To get a quick understanding of the files, features and APIs involved in creating an offline application, review the different building blocks listed in Table 2. Each of the elements described in Table 2 plays a crucial role in serving, enabling and maintaining offline applications.

Understanding Cache Manifest

The application manifest file acts as the master list of files for the offline application. The manifest is a simple text file that adheres to a few conventions as required by the Application Cache API. A typical manifest file may resemble the following example:

CACHE MANIFEST
# version 1

CACHE:
/home.htm
/contact.htm
/images/logo.png
/styles/global.css
/script.js

FALLBACK:
/events.aspx /events.htm

NETWORK:
/customer/list

All files in the manifest are downloaded and stored in the application cache together. If a single file listed in the cache encounters a problem during transmission from the server to the client, then an error is thrown and none of the files listed in the manifest are loaded into application cache.
The all-or-nothing rule allows you to confidently rely on the existence of the application’s files in the application cache if the manifest downloads without error. The following explanation dissects the example line-by-line to help you fully understand the mechanics of the manifest file.

Required and Implied Elements

The first line of any application manifest file must read CACHE MANIFEST.

CACHE MANIFEST

This is a strict rule, as you may not have whitespace, comments or any other information on the first line of the manifest. After leading with that term, you then have some flexibility as to how you craft the manifest file. The next section is the CACHE: section.

CACHE:
/home.htm
/contact.htm
/images/logo.png
/styles/global.css
/script.js

The CACHE: section includes the list of all HTML, CSS, image and script files that make up the application. Files listed in the CACHE section may include any files regularly found as a part of a web application. You may reference any application files in the manifest. In this context, “application files” include static HTML pages, CSS files, images and scripts as well as server-processed pages or files. The manifest only cares about the resulting file as served to the browser.

The heading of CACHE: is optional. Any file listed in the manifest file that does not appear under any other section heading is assumed to be a file to load into the application cache. This flexibility may prove helpful if you choose to generate your manifest file programmatically. However, for consistency and clarity, consider using the CACHE: heading in order to make your manifest easily understandable.

Maintaining Application Versions

The second line in the manifest listing is a comment. Any line that begins with the hash (#) symbol is a comment in the manifest file.

# version 1

This comment, however, serves a specific purpose.
In the Understanding Application Cache Event Lifecycle section later in this article, you’ll learn how the manifest file is used as the only point of reference to trigger changes in the application. In other words, if you save an update to the text in an HTML file and fail to make a change to the application manifest, then that change is never sent to the client. You must make a change in the manifest file so the update events fire for the application cache, which triggers a re-download of all the application’s files. Therefore, if you make a content change to an application file, you must have a mechanism for introducing change into the manifest file to prompt the client to re-download the contents of the manifest. A “version” comment works perfectly for this purpose.

The FALLBACK Section

Consider a page in the application that lists upcoming events as served from a database on the website. While this page is not accessible without an Internet connection, you want your users to see something other than an “unable to connect” error page (Figure 1) when they click on links to the Events page. The FALLBACK section maps server resources to alternatives available in the application cache.

FALLBACK:
/events /events.htm
/images/headshots/ /offline-headshot.png

The FALLBACK section creates a mapping of alternative pages to serve to the user if a request to the original file fails or the computer is working offline. Associations between actual paths on the website and the fallback replacement are made by listing the full relative path to each location separated by a space. While there are no wild cards allowed in fallback mapping definitions, URL patterns are respected in the FALLBACK section. For instance, if this website included thousands of headshots for every individual listed on the site, then you would not want to add each headshot to the manifest file. Adding all these files would bloat the payload of the application to include images that the user may never use.
Instead, the FALLBACK section examines the /images/headshots/ path and knows that any path that includes /images/headshots/ is a part of the FALLBACK pattern and is served the mapped offline resource instead.

The NETWORK Section

The intent around the application manifest file is to define clear boundaries around the given application to ensure all the required resources are available on the client when there is no access to the Internet. While providing a cached option to the user is often possible, not all server resources are candidates for caching. Search pages, dynamically constructed lists, user input forms and any other page that simply does not operate without the web server are not candidates for caching. The NETWORK section creates a whitelist of URLs that are excluded from control of the application cache.

NETWORK:
/customer/list

This example lists the URL to the customer list page, which is built from the database. Even though this URL is not listed in the CACHE section, without the entry in the NETWORK section, any requests for URLs that are not cached are cancelled by the browser. Adding the path to the NETWORK section officially excludes the given URL from the browser’s application cache handling and sends all requests for it directly to the server. While wildcards are not allowed in the NETWORK section, the asterisk character (*) will whitelist any URL that is not explicitly listed in the CACHE section of the manifest file. Using the asterisk would change your NETWORK section to:

NETWORK:
*

Mime Type and Encoding

In order to be processed correctly by the browser, the application manifest file must be served with the appropriate mime type and content encoding. Manifest files must have the mime type of text/cache-manifest. Further, the content encoding of the file must be set to UTF-8. You may configure your web server to serve all files with the .appcache extension with the right mime type and content encoding.
Alternatively, you may choose to set the mime type and encoding on the server for individual files. The example in this article configures each file individually.

Referencing the Manifest from the HTML Page

Once the manifest includes all the appropriate sections and is set to serve with the right mime type and content encoding, the manifest is ready to be referenced from an HTML page. To reference a manifest file in an HTML page, you use the new manifest attribute of the html element to point to the manifest file. If your manifest is a static file, you may reference the file with the established .appcache file extension convention:

<html manifest="manifest.appcache">

When the browser recognizes a value for the manifest attribute, it knows to treat the page as an HTML offline application and initiates the checking event against the manifest file.

Understanding Application Cache Event Lifecycle

The Application Cache API uses a well-defined event lifecycle in order to help keep track of the state and status of application files. When a page pointing to a manifest file is encountered for the first time, the browser checks the application manifest to determine which files must be downloaded and added into the application cache. As each file is served to the client, the downloading and progress events fire until the entire contents of the files listed in the manifest are successfully loaded into application cache.

Any subsequent requests to the host page trigger the checking event against the manifest file. If the computer is offline, then the contents of the application cache are used. If a connection is present, then the checking event is fired and the manifest is examined to see if any changes are present. If there are no changes to the manifest, then once again the contents of the application cache are used to render the application. However, if the manifest is changed, then the files listed in the manifest are downloaded once again and the process repeats.
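Because every content change must surface as a change in the manifest, a common convention is to stamp the version comment with a digest of the listed files, so the checking event sees a difference whenever any file changes. The following Python generator illustrates that convention; it is a sketch for this article's discussion, not a tool shipped with the API:

```python
import hashlib

def build_manifest(cache_files, fallbacks=(), network=("*",)):
    """Build an appcache manifest string.

    cache_files: {path: file_contents_bytes}. The version comment is
    derived from the contents, so editing any listed file changes the
    manifest and triggers a client re-download of the whole package.
    """
    digest = hashlib.sha1()
    for path in sorted(cache_files):
        digest.update(path.encode("utf-8"))
        digest.update(cache_files[path])

    lines = ["CACHE MANIFEST", "# version " + digest.hexdigest()[:12], "", "CACHE:"]
    lines.extend(sorted(cache_files))
    if fallbacks:
        lines.extend(["", "FALLBACK:"])
        lines.extend("%s %s" % (src, alt) for src, alt in fallbacks)
    lines.extend(["", "NETWORK:"])
    lines.extend(network)
    return "\n".join(lines) + "\n"
```

Serving the result of such a generator (with the text/cache-manifest mime type) removes the need to remember to bump the version comment by hand.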
Table 3 lists in detail each application cache event and the context in which it fires.

Understanding the Difference between HTTP Caching and HTML Application Cache

As you develop HTML offline applications, an important distinction to have a clear understanding of is the difference between HTTP caching and application cache. HTTP (or browser) caching is the mechanism of saving a copy of a web page, image, script or style sheet to the browser cache when you visit a web page. This behavior is desirable and essential so web servers across the world are not inundated with unnecessary requests for files that are already present on a user’s machine. The files cached in the browser cache are often set to have relatively long (days, weeks and even sometimes longer) expiration dates in order to help improve the web’s performance. The purpose of the application cache isn’t to save the server requests for performance reasons, but to make the application’s files available even when working offline.

The problem is these two caching mechanisms may collide, giving you unexpected results from your application. Consider a page configured to be available offline, but cached in the browser cache for 24 hours. If you push a change to your offline application, it would take a full day before you were able to see the change to the page. Therefore you often need to disable browser caching on files saved in the application manifest in order to facilitate accurate file synchronization.

While no server-side framework technologies are required to enable HTML offline applications, the example featured in this article uses ASP.NET MVC in order to render the HTML and manifest files with browser caching disabled on the individual files. The approach of disabling caching on a file-by-file basis on the server is used here for two reasons. The reason to use the server is that the use of HTML META tags is ineffective.
There are META tags that purportedly are able to halt browser caching, but unfortunately these tags are often ignored by modern web browsers and therefore the desired result is not consistent or reliable. The reason to configure the caching rules on a file-by-file basis is to make the sample code portable. Rather than requiring you to make a number of changes in IIS before running the sample application associated with this article, programmatic configuration makes the example portable and able to run without any necessary web server customizations.

What Is Web Storage?

Web Storage takes client-side data persistence to the next level beyond the traditional HTTP cookie. In the past, the only option available to web developers to save information on a user’s machine was to write a cookie. While cookies are sufficient in many cases, often developers required a persistence mechanism that offered a higher capacity and a more structured API. Whereas the contents of a cookie are transmitted with each HTTP request and response, data saved in Web Storage remains exclusively on the client. The “client-only” nature of Web Storage data opens up the possibility to save much more information on the client than was ever possible before using HTTP cookies. While capacity limits may vary by browser and user settings, in general, Web Storage capacity is often approximately 5 MB on a user’s machine.

Web Storage is available under two different modes, local storage and session storage, as detailed in Table 4. While the scope and lifecycle of data stored in local vs. session storage varies, the API interface for accessing either type is exactly the same.
During the sales process, the salespeople must keep notes about each customer and need a system to collect feedback. You are tasked with implementing a web application to manage the new data and existing customer data. A critical obstacle you must overcome is the fact that the salespeople are often in locations with little to no Internet connectivity available. In order to meet the requirements of your project, you must create an application that is capable of the following: “AlwaysNote” is the name given to the application implemented in this article that fulfills each of the stated requirements. AlwaysNote responds to the online and offline status of the computer and displays a green “online” status message in the upper right corner of the form, as depicted in Figure 2. When the computer is offline, the status message updates to a red “offline” message to indicate to the user that they are now working in offline mode, as shown in Figure 3. While working in connected mode, user data is first saved into local Web Storage and then sent to the server, as shown in Figure 4. When disconnected, as demonstrated by Figure 5, the data is stored only in local storage and will synchronize with the server as soon as an outside connection is available. When a connection to the web is once again available, the application handles the online event and then sends all the records added or edited during the unavailability of the outside connection. Figure 6 demonstrates how the application responds to the renewed availability of a network connection. The AlwaysNote application uses a number of technologies together to support the offline application. Table 4 lists each technology and describes the role it plays in the application. When the user clicks on the Events link while working offline, the offline version of the events file is served from the application cache instead of sending the request to the server, as shown in Figure 7.
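Before digging into the application itself, it helps to have a feel for the Web Storage interface described earlier, which both storage modes share (getItem, setItem, removeItem, key, length, clear). The following is a simplified in-memory sketch of that interface, for illustration only; real browsers persist localStorage to disk and scope sessionStorage to the tab session.

```javascript
// Simplified in-memory model of the Web Storage interface (illustration
// only; not the browser's actual implementation).
function createStorage() {
  var items = {};
  return {
    get length() { return Object.keys(items).length; },
    key: function (n) { return Object.keys(items)[n] || null; },
    getItem: function (k) { return k in items ? items[k] : null; },
    setItem: function (k, v) { items[k] = String(v); }, // values are always strings
    removeItem: function (k) { delete items[k]; },
    clear: function () { items = {}; }
  };
}

var storage = createStorage();
storage.setItem(0, JSON.stringify({ Name: "Acme", Note: "Follow up" }));
console.log(storage.length);                      // 1
console.log(JSON.parse(storage.getItem(0)).Name); // Acme
```

Note that values are always stored as strings, which is why AlwaysNote serializes its records with JSON.stringify and parses them back with JSON.parse.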
As you build up the code in this article you’ll add a number of files to a Visual Studio solution. Table 5 details the purpose of each file discussed in this example and Figure 8 depicts the structure in the Solution Explorer. The Database The database for AlwaysNote is made up of a single table. The Customers table includes CustomerID, Name and Note columns to store customer information. Figure 9 shows the configuration of the table in SQL Server. Implementing Server Logic Beginning at the lowest level of the application, the server logic implemented for AlwaysNote is responsible for writing data to the database, processing persistence requests from the client and configuring pages for use as an offline application. To begin, let’s review the data access implementation found in the customer repository. Customer Repository The CustomerRepository class is responsible for doing the actual work of updating the database, which in this case means the Customer table. Both the Update and Add methods use standard LINQ to SQL syntax to commit changes to the database. You can see the full code listing for the CustomerRepository class in Listing 1. Customer Controller To facilitate interaction between the view and the model layer there are a few support classes that act as containers for messages between layers. The CustomerInputModel is used to encapsulate the data coming from the UI layer into the controller.

public class CustomerInputModel
{
    public string Name { get; set; }
    public string Note { get; set; }
    public int ID { get; set; }
    public string Key { get; set; }
}

The Name and Note properties are self-explanatory. The ID property holds the value of the CustomerID column in the Customer table. The Key property contains the index number of the record as it is entered into local storage on the client. Tracking these two values independently makes synchronization a trivial task.
While the CustomerInputModel is responsible for carrying data coming from the UI, the CustomerViewModel is used to model data returned to the view. In this class the ID and Key properties fulfill the same purpose as described for the CustomerInputModel.

public class CustomerViewModel
{
    public string ID { get; set; }
    public string Key { get; set; }
}

The next step is to implement the CustomerController, which uses the CustomerInputModel and CustomerViewModel to interact with the view. The Save method accepts an instance of the CustomerInputModel and attempts to save new customer information or updates to existing records. If an ID value is present, the values are updated; otherwise a new customer record is created. A Key value is always available since the data is saved first on the client. Whether the data is new or updated, the record’s ID is returned to the view. When the web page receives a response from the server, the Key is used to look up the data saved in local storage and the ID value is updated to make sure the data stays in sync. Listing 2 shows the code for the CustomerInputModel, CustomerViewModel and CustomerController classes. Home Controller The HomeController is responsible for returning the appropriately formatted view for the application. In this instance, browser caching is disabled for both the index and manifest pages. The manifest is further configured to return the text/cache-manifest MIME type (through the ContentType property) and the page encoding is set to UTF-8 via the ContentEncoding property. Remember, the controller actions are used in favor of a static file, in this case to disable browser caching. Listing 3 shows the full code for the HomeController class. The View The HTML structure of the view includes the online status container, HTML form elements, navigation and elements that provide user feedback on the page. Listing 4 shows the full code for the index view.
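The Key/ID pairing described above is worth pausing on: the Key is assigned on the client the moment a record is first saved locally, while the ID arrives later from the server. The following is a minimal sketch of that bookkeeping in plain JavaScript; the helper names are hypothetical and not code from the article.

```javascript
// Hypothetical sketch of the client-side Key/ID bookkeeping.
// Key: position of the record in local storage (assigned on the client).
// ID:  primary key from the Customers table (assigned by the server).
var records = {}; // stand-in for local storage

function saveLocally(key, name, note) {
  records[key] = { Key: key, ID: "", Name: name, Note: note, IsDirty: true };
}

// Called when the server responds with its view model { Key, ID }.
function applyServerResponse(vm) {
  var record = records[vm.Key];
  record.ID = vm.ID;      // server-side identity is now known
  record.IsDirty = false; // record is in sync with the server
}

saveLocally(0, "Acme", "Met at trade show");
applyServerResponse({ Key: 0, ID: "42" }); // simulated server reply
console.log(records[0].ID, records[0].IsDirty); // 42 false
```

Because the server echoes the Key back in its response, the client can match each reply to the right local record even when several saves are in flight.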
Getting Started with the Script The script for the view begins by handling the document ready event using the traditional jQuery syntax. All the other functions discussed in this article are scoped inside the ready function (as indicated by the ellipsis at the end of the code snippet). Before discussing the meat of the script, first familiarize yourself with some of the utility functions used throughout the script.

$(function () {
    var customerIndex = 0;

    function logMessage(message) {
        $("#log").append("<li>" + message + "</li>");
    }

    function clearUI() {
        $("#name, #note").val("");
        $("#log").html("");
    }

    ...
});

The customerIndex variable’s purpose is to keep track of the locally-stored index of the current customer record. Many of the functions throughout the script reference this variable. The logMessage function is used to display messages to the user by adding items to an unordered list. The clearUI function removes any entered data from the form elements and clears the log list as filled out by logMessage. Detecting an Internet Connection Detecting an Internet connection is possible by querying the read-only window.navigator.onLine property. This property gets its value during the browser’s monitoring of network connectivity. While the function simply returns the value of a single property, wrapping the call to the value may prove helpful during development. Once wrapped up, you can override the property’s actual value to test how your script is working in an offline mode without having to disable your wireless networking hardware. Further, you may choose to extend support for network detection as discussed in the sidebar, Rock Solid Connectivity Detection.

function isOnLine() {
    return navigator.onLine;
}

Now that you are able to detect the status of the Internet connectivity, the next step is to update the page to report the connection status.
The reportOnlineStatus function uses the value from the isOnLine function to decide how to change the UI to reflect the network availability. Depending on the presence of an Internet connection, the DIV is updated to read either “Online” with a green background or “Offline” with a red background, as applied via a CSS class.

function reportOnlineStatus() {
    var status = $("#onlineStatus");
    if (isOnLine()) {
        status.text("Online");
        status.removeClass("offline").addClass("online");
    } else {
        status.text("Offline");
        status.removeClass("online").addClass("offline");
    }
}

Now that the page knows how to respond to connectivity changes, the next step is to handle the events that fire when the browser recognizes the connection state has changed. Responding to Connectivity Changes There are two events that the browser fires when a connection to the web changes availability. By subscribing to the online event, the page can easily update the UI to reflect the presence of an Internet connection and then send all the data entered while the connection was unavailable to the server. When the application is unable to communicate with the web, the UI is updated. No other actions are required at this point because the save logic is responsible for saving the data locally.

window.addEventListener("online", function (e) {
    reportOnlineStatus();
    saveToServer();
}, true);

window.addEventListener("offline", function (e) {
    reportOnlineStatus();
}, true);

Updating the Application Cache When a new version of the application is made available, the manifest is updated in order to push a change notification to the client. The updateready event fires once the manifest is checked for changes and all the files listed in the manifest are successfully downloaded. The event handler for the updateready event is available on window.applicationCache.onupdateready. When this event fires you must swap in the new version of the files and then reload the page.
window.applicationCache.onupdateready = function (e) {
    applicationCache.swapCache();
    window.location.reload();
}

Swapping the cache is necessary because the version of the files the user sees at the time of download is the old version of the files. Once the new files are available on the client, they must be loaded into the cache. Once the new page is loaded into the cache the browser must read the latest version of the page to display to the user. The easiest and most unobtrusive way to accomplish this is to programmatically reload the page in JavaScript. Saving Changes The first step in saving changes is to get an instance of the JSON object used to model data entered through the form. The interface for the model on the client is exactly the same as the CustomerInputModel on the server, except for the IsDirty property. The IsDirty property is switched to true when changes are made to a particular customer record so that only changes to edited (or new) records are sent to the server for processing. The getModel function first defines the JSON object with the appropriate interface and then attempts to get an instance of that object from local storage based on the selected customer index value. Since the data is stored in local storage as a flat string, if a record is found then the string is parsed into a full JSON object by using the JSON.parse function. If a record is not found at the index location then the empty model is returned to the caller in order to ensure a working instance of the object.

function getModel(index) {
    var model = { Name: "", Note: "", IsDirty: false, Key: "", ID: "" };
    if (localStorage[index] != null) {
        model = JSON.parse(localStorage[index]);
    }
    model.Key = index;
    return model;
}

The next two functions are responsible for doing the explicit work of saving any new changes first to the client and then to the server. The saveToLocal function begins by calling getModel and passing in the current customer index value.
Once the model is available, all the latest values for each property are read from the form elements and placed into the model object. Next, the object is marked as dirty so the procedure that sends the changes to the server will know to send this object to the server. Then the JSON object is serialized into a flat string and saved into local storage using the setItem function, with the current customer index value as the key. Finally, the user is notified that the information is saved locally by calling logMessage to add an item to the unordered list on the page.

function saveToLocal() {
    var model = getModel(customerIndex);
    model.Name = $("#name").val();
    model.Note = $("#note").val();
    model.IsDirty = true;
    localStorage.setItem(customerIndex, JSON.stringify(model));
    logMessage("'" + model.Name + "' saved locally.");
}

When the page is ready to take the data saved in local storage and send those changes to the server, the saveToServer function is called. The first operation in saveToServer is to loop through each of the items stored in local storage. During each loop iteration, the current model is extracted from local storage. Then the object is evaluated to see if IsDirty is set to true, which signifies that the object requires server processing. Figure 10 demonstrates how the JSON data appears when stored in local storage as a string. If the object requires server processing then the jQuery post function is used to send the JSON object to the Save action on the CustomerController. When a response is returned from the server, the Key value is used to extract the model from local storage, set IsDirty to false and update the ID value from what came back from the server. Then the JSON object is again serialized and saved into local storage. Finally, the UI is updated to notify the user that the changes are saved on the server.

function saveToServer() {
    for (var i = 0; i < localStorage.length; i++) {
        var model = getModel(i);
        if (model.IsDirty) {
            $.post("/customer/save", model, function (result) {
                var saved = getModel(result.Key);
                saved.IsDirty = false;
                saved.ID = result.ID;
                localStorage.setItem(result.Key, JSON.stringify(saved));
                logMessage("'" + saved.Name + "' saved on the server.");
            });
        }
    }
}

Tying each of these functions together is the logic implemented in the save button’s click handler.
First, the browser’s support for local storage is detected using Modernizr. If local storage support is available then the latest changes are saved by calling the saveToLocal function. Next, if a connection to the Internet is available, any changes marked as IsDirty are sent to the server by calling the saveToServer function. If local storage isn’t available then, for example purposes here, the user is simply alerted that the current browser is not supported. In the real world you may want to implement a more user-friendly approach.

$("#save").click(function () {
    if (Modernizr.localstorage) {
        saveToLocal();
        if (isOnLine()) {
            saveToServer();
        }
    } else {
        alert("AlwaysNote requires local storage.");
    }
});

Displaying Customer Information In order to show the latest customer information to the user, the showCustomer function begins by calling getModel to extract the selected customer model. If the customerIndex points to a record position that doesn’t have a value, the UI is cleared of any previously entered values. If the customer record does exist then the current values are placed into the page’s form elements, which displays the latest values to the user.

function showCustomer() {
    var model = getModel(customerIndex);
    if (model == null) {
        clearUI();
    } else {
        $("#name").val(model.Name);
        $("#note").val(model.Note);
    }
}

Handling UI Navigation When the user clicks on the “next” button, the customer index is incremented by one and the page is instructed to show the current customer to the user.

$("#next").click(function () {
    customerIndex++;
    showCustomer();
});

When the back button is clicked, the customer index is decremented by one (as long as the index is not currently on the first record) and the current customer is shown to the user.
$("#back").click(function () {
    if (customerIndex > 0) {
        customerIndex--;
        showCustomer();
    }
});

Listing 4 shows the full code listing for the script. The Manifest File Now that the HTML and script are implemented, giving AlwaysNote the required behavior and structure, the next step in making the application available offline is to build and reference the manifest file. The manifest includes references to the home view, style sheet, jQuery and Modernizr script files, as well as the fallback file shown to users if they attempt to navigate to the events page while offline. The FALLBACK section creates the mapping from the server-side events page to its client-side replacement:

CACHE MANIFEST
# version 1
CACHE:
/
/Content/style.css
/Scripts/modernizr-1.7.min.js
/Scripts/jquery-1.5.1.min.js
/events.htm
FALLBACK:
/events /events.htm
NETWORK:
*

You can see the full code listing for the manifest file in Listing 5. Referencing the Manifest File from the HTML Page Once the manifest is crafted to include all the necessary files for the application, the next step is to reference the manifest from the HTML page. To create the link, you use the new manifest attribute of the html element to point to the manifest file: <html manifest="home/manifest"> In this case, the manifest is served from the Manifest action of the HomeController. Listing 4 shows the full code for the index view, and Listing 7 shows the style sheet for the view. Fallback Page Recall from the manifest file that the server events page is mapped to the local events page in the FALLBACK section of the application manifest. In order for the application to work properly, you must also include the fallback page in the CACHE section of the manifest. You can see the full code listing for the events fallback page in Listing 6. Conclusion Together, HTML Web Storage and Application Cache create a compelling and viable path to creating offline web applications.
While some hurdles still exist, with Internet Explorer not yet supporting the application cache, nearly every other modern browser is capable of providing the environment you need in order to build disconnected applications that don’t skimp on rich features.

Listing 1: Customer repository (CustomerRepository.cs)

using System.Collections.Generic;
using System.Linq;

namespace AlwaysNote.Models
{
    public class CustomerRepository
    {
        public void Update(int id, string name, string note)
        {
            Customer customer = null;
            using (AlwaysNoteDataContext db = new AlwaysNoteDataContext())
            {
                var query = from c in db.Customers
                            where c.CustomerID == id
                            select c;
                customer = query.SingleOrDefault<Customer>();
                customer.Name = name;
                customer.Note = note;
                db.SubmitChanges();
            }
        }

        public int Add(string name, string note)
        {
            Customer customer = null;
            using (AlwaysNoteDataContext db = new AlwaysNoteDataContext())
            {
                customer = new Customer();
                customer.Name = name;
                customer.Note = note;
                db.Customers.InsertOnSubmit(customer);
                db.SubmitChanges();
            }
            int id = customer.CustomerID;
            return id;
        }
    }
}

Listing 2: Customer controller (CustomerController.cs)

using System.Web.Mvc;
using AlwaysNote.Models;

namespace AlwaysNote.Controllers
{
    public class CustomerInputModel
    {
        public string Name { get; set; }
        public string Note { get; set; }
        public int ID { get; set; }
        public string Key { get; set; }
    }

    public class CustomerViewModel
    {
        public string ID { get; set; }
        public string Key { get; set; }
    }

    public class CustomerController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }

        public ActionResult Save(CustomerInputModel model)
        {
            CustomerRepository repository = new CustomerRepository();
            int id = 0;
            if (model.ID > 0)
            {
                id = model.ID;
                repository.Update(id, model.Name, model.Note);
            }
            else
            {
                id = repository.Add(model.Name, model.Note);
            }
            CustomerViewModel vm = new CustomerViewModel();
            vm.ID = id.ToString();
            vm.Key = model.Key;
            return Json(vm);
        }
    }
}

Listing 3: Home controller (HomeController.cs)

using System.Web.Mvc;

namespace AlwaysNote.Controllers {(); } }

Listing 4: Index page (Index.cshtml)

@{ Layout = null; }
<!DOCTYPE html>
<html manifest="home/manifest">
<head>
    <title>AlwaysNote: Customer Note System</title>
    <link rel="Stylesheet" href="/Content/style.css" type="text/css" />
    <script src="/Scripts/modernizr-1.7.min.js" type="text/javascript"></script>
    <script src="/Scripts/jquery-1.5.1.min.js" type="text/javascript"></script>
    <script>
        $(function () {
            var customerIndex = 0;

            $("#back").click(function () {
                if (customerIndex > 0) {
                    customerIndex--;
                    showCustomer();
                }
            });

            $("#next").click(function () {
                customerIndex++;
                showCustomer();
            });

            $("#save").click(function () {
                if (Modernizr.localstorage) {
                    saveToLocal();
                    if (isOnLine()) {
                        saveToServer();
                    }
                } else {
                    alert("AlwaysNote requires local storage.");
                }
            });

            function isOnLine() {
                return navigator.onLine;
            }

            function getModel(index) {
                var model = { Name: "", Note: "", IsDirty: false, Key: "", ID: "" };
                if (localStorage[index] != null) {
                    model = JSON.parse(localStorage[index]);
                }
                model.Key = index;
                return model;
            }

            function saveToLocal() {
                var model = getModel(customerIndex);
                model.Name = $("#name").val();
                model.Note = $("#note").val();
                model.IsDirty = true;
                localStorage.setItem(customerIndex, JSON.stringify(model));
                logMessage("'" + model.Name + "' saved locally.");
            }

            function saveToServer() {
                for (var i = 0; i < localStorage.length; i++) {
                    var model = getModel(i);
                    if (model.IsDirty) {
                        $.post("/customer/save", model, function (result) {
                            var saved = getModel(result.Key);
                            saved.IsDirty = false;
                            saved.ID = result.ID;
                            localStorage.setItem(result.Key, JSON.stringify(saved));
                            logMessage("'" + saved.Name + "' saved on the server.");
                        });
                    }
                }
            }

            function logMessage(message) {
                $("#log").append("<li>" + message + "</li>");
            }

            function clearUI() {
                $("#name, #note").val("");
                $("#log").html("");
            }

            function showCustomer() {
                var model = getModel(customerIndex);
                if (model == null) {
                    clearUI();
                } else {
                    $("#name").val(model.Name);
                    $("#note").val(model.Note);
                }
            }

            function reportOnlineStatus() {
                var status = $("#onlineStatus");
                if (isOnLine()) {
                    status.text("Online");
                    status.removeClass("offline").addClass("online");
                } else {
                    status.text("Offline");
                    status.removeClass("online").addClass("offline");
                }
            }

            window.applicationCache.onupdateready = function (e) {
                applicationCache.swapCache();
                window.location.reload();
            }

            window.addEventListener("online", function (e) {
                reportOnlineStatus();
                saveToServer();
            }, true);

            window.addEventListener("offline", function (e) {
                reportOnlineStatus();
            }, true);

            if (isOnLine()) {
                saveToServer();
            }

            showCustomer();
            reportOnlineStatus();
        });
    </script>
</head>
<body>
    <section>
        <div id="onlineStatus"></div>
        <input type="text" placeholder="Name" id="name" />
        <textarea id="note" placeholder="Note"></textarea>
        <div id="command">
            <input type="button" value="&laquo;" id="back" />
            <input type="button" value="&raquo;" id="next" />
            <input type="button" value="Save" id="save" />
        </div>
        <a href="/events">Events</a>
        <a href="/customer/list">Customers</a>
        <ul id="log"></ul>
        <div id="version">Version 1</div>
    </section>
</body>
</html>

Listing 5: Cache manifest (Manifest.cshtml)

CACHE MANIFEST
# version 1
CACHE:
/
/Content/style.css
/Scripts/modernizr-1.7.min.js
/Scripts/jquery-1.5.1.min.js
/events.htm
FALLBACK:
/events /events.htm
NETWORK:
*
@{ Layout = null; }

Listing 6: Static Events page (events.htm)

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Events</title>
    <link rel="Stylesheet" href="/Content/style.css" type="text/css" />
</head>
<body>
    <section>
        <h1>Events</h1>
        <p>
            The event listings are only available when you are connected to the
            internet. Please call us at (800) 555-5555 to hear our list of
            upcoming events.
        </p>
        <div id="version">Version 1</div>
    </section>
</body>
</html>

Listing 7: Cascading style sheet (style.css)

body {
    font-family: Segoe UI, Arial, Helvetica, Sans-Serif;
    background-color: #333;
}
section {
    position: relative;
    padding: 10px;
    width: 250px;
    border: 5px solid #ccc;
    background-color: #fff;
    -moz-border-radius: 10px;
    border-radius: 10px;
    margin-left: auto;
    margin-right: auto;
    margin-top: 25px;
}
section a {
    font-size: .7em;
}
section a, section a:link, section a:hover, section a:visited, section a:active {
    color: #999;
    text-decoration: none;
}
input, textarea {
    font-family: Segoe UI, Arial, Helvetica, Sans-Serif;
}
input {
    display: block;
}
#onlineStatus {
    position: absolute;
    right: 0px;
    top: 0px;
    font-size: .7em;
    padding: 6px;
    -moz-border-radius-bottomleft: 10px;
    border-bottom-left-radius: 10px;
    color: #fff;
}
#command input {
    display: inline;
}
.online {
    background-color: #060;
}
.offline {
    background-color: #900;
}
#log li {
    font-size: .7em;
}
#version {
    font-size: .7em;
    color: #ccc;
}
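One idea mentioned earlier deserves a concrete sketch: wrapping the call to navigator.onLine so its value can be overridden while testing offline behavior, without disabling your wireless hardware. A possible (hypothetical) implementation, not code from the article:

```javascript
// Hypothetical test-friendly wrapper around navigator.onLine.
// forcedStatus lets you simulate offline mode during development.
var forcedStatus = null; // null = use the real value; true/false = override

function isOnLine() {
  if (forcedStatus !== null) {
    return forcedStatus;
  }
  // Guarded so the sketch also runs outside a browser.
  return (typeof navigator !== "undefined" && typeof navigator.onLine === "boolean")
    ? navigator.onLine
    : true;
}

forcedStatus = false;    // simulate losing the connection
console.log(isOnLine()); // false
forcedStatus = null;     // back to the browser-reported value
```

With this in place, the rest of the script (reportOnlineStatus, the save handler) can be exercised in both modes from the developer console.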
https://www.codemag.com/article/1112051
Procatopus similis Ahl 1927 Kumba, Cameroon. Photo: Courtesy of Ed Pürzl Supé, Cameroon. Photo: Courtesy of Ed Pürzl Muyuka, Cameroon. Photo: Courtesy of Ed Pürzl Procatopus sp. Edéa. Photo: Courtesy of Ed Pürzl Male, commercial import from Cameroon. Photo courtesy of Ed Pürzl. Wild male from a commercial import into the USA in 2003. Photo courtesy of Tony Terceira. Mundemba. Wild male from a commercial shipment to the USA, 2004. Photo courtesy of Tony Terceira. P. sp. Yabassi was collected by Paul Blowers. He observed that many Procatopus are found just downstream of where the locals process the sago trees, pounding the pith in the river shallows with the fish feeding on the debris. Apparently these were so abundant that they couldn't lift the net from the water. The first reported import to the BKA was from David Blair on 8th August 1970. They were imported under the name P. glaucicaudis. P. sp. Yabassi produced many belly sliders. The best method of producing healthy fry was found to be aerating the container where the eggs are stored.
http://www.killifish.f9.co.uk/Killifish/Killifish%20Website/Ref_Library/Procatopus/Proc.similis.htm
zeromq 4.2.1 Interface to the C ZeroMQ library To use this package, run the following command in your project's root directory: dub add zeromq ZeroMQ zeromq.org "ZeroMQ in a hundred words" - quote straight from zeromq.org. Usage can be derived by translating the guide above using the interface(s) here. Alternatively some wrapper implementations are listed below. Usage: import deimos.zmq.zmq; link your program with zmq, and Zap, Pow! (to quote the site) it works. Implementations: - Registered by Matt Soucy - 4.2.1 released 3 years ago - D-Programming-Deimos/ZeroMQ - github.com/D-Programming-Deimos/ZeroMQ - LGPL v3 - Authors: - - Dependencies: - none - Versions: - Show all 21 versions - Download Stats: 24 downloads today 75 downloads this week 490 downloads this month 42147 downloads total - Score: - 4.4 - Short URL: - zeromq.dub.pm
https://code.dlang.org/packages/zeromq/4.2.1
What about existing libraries? The quickest way to achieve the benefits of integrating React with D3 is to use a library: a collection of components with pre-built charting visualizations. Plug it into your app, move on with life. Great option for basic charts. I recommend it dearly to anyone who comes to me and asks about building stuff. Try a library first. If it fits your needs, perfect! You just saved yourselves plenty of time. Where libraries become a problem is when you want to move beyond the library author's idea of How Things Are Done. Custom features, visualizations that aren't just charts, disabling this or that default behavior ... it gets messy. That's why I rarely use libraries myself. I often find it quicker to build something specific from scratch than to figure out how to hold a generalized API just right. But they're a great first step. Here's a few of the most popular React & D3 libraries 👇 List borrowed from a wonderful Smashing Magazine article, because it's a good list. Victory.js React.js components for modular charting and data visualization Victory offers low level components for basic charting and reimplements a lot of D3's API. Great when you need to create basic charts without a lot of customization. Supports React Native. Here's what it takes to implement a Barchart using Victory.js. You can try it on CodeSandbox

const data = [
  { quarter: 1, earnings: 13000 },
  { quarter: 2, earnings: 16500 },
  { quarter: 3, earnings: 14250 },
  { quarter: 4, earnings: 19000 },
]

const App = () => (
  <div style={styles}>
    <h1>Victory basic demo</h1>
    <VictoryChart domainPadding={20}>
      <VictoryBar data={data} x="quarter" y="earnings" />
    </VictoryChart>
  </div>
)

Create some fake data, render a <VictoryChart> rendering area, add a <VictoryBar> component, give it data and axis keys. Quick and easy. My favorite feature of Victory is that components use fake random data until you pass your own. Means you always know what to expect.
Recharts A composable charting library built on React components Recharts is like a more colorful Victory. A pile of charting components, some customization, loves animating everything by default. Here's what it takes to implement a Barchart using Recharts. You can try it on CodeSandbox

const data = [
  { quarter: 1, earnings: 13000 },
  { quarter: 2, earnings: 16500 },
  { quarter: 3, earnings: 14250 },
  { quarter: 4, earnings: 19000 },
]

const App = () => (
  <div style={styles}>
    <h1>Recharts basic demo</h1>
    <BarChart width={500} height={300} data={data}>
      <XAxis dataKey="quarter" />
      <YAxis dataKey="earnings" />
      <Bar dataKey="earnings" />
    </BarChart>
  </div>
)

More involved than Victory, but same principle. Fake some data, render a drawing area this time with <BarChart> and feed it some data. Inside the <BarChart> render two axes, and a <Bar> for each entry. Recharts hits a great balance of flexibility and ease ... unless you don't like animation by default. Then you're in trouble. PS: Recharts v2 is currently in beta. The API might soon change. Nivo nivo provides a rich set of dataviz components, built on top of the awesome d3 and Reactjs libraries. Nivo is another attempt to give you a set of basic charting components. Comes with great interactive documentation, support for Canvas and API rendering. Plenty of basic customization. Here's what it takes to implement a Barchart using Nivo. You can try it on CodeSandbox

const data = [
  { quarter: 1, earnings: 13000 },
  { quarter: 2, earnings: 16500 },
  { quarter: 3, earnings: 14250 },
  { quarter: 4, earnings: 19000 },
]

const App = () => (
  <div style={styles}>
    <h1>Nivo basic demo</h1>
    <div style={{ height: "400px" }}>
      <ResponsiveBar data={data} keys={["earnings"]} indexBy="quarter" />
    </div>
  </div>
)

Least amount of effort! You render a <ResponsiveBar> component, give it data and some params, and Nivo handles the rest. Wonderful! But it means you have to learn a whole new language of configs and props that might make your hair stand on end.
The documentation is great and shows how everything works, but I found it difficult to know which prop combinations are valid. VX vx is a collection of reusable low-level visualization components. vx combines the power of d3 to generate your visualization with the benefits of react for updating the DOM. VX is the closest to the approaches you're learning in this book. React for rendering, D3 for calculations. When you build a set of custom components for your organization, a flavor of VX is what you often come up with. That's why I recommend teams use VX when they need to get started quickly. Here's what it takes to implement a Barchart using VX. You can try it on CodeSandbox

const data = [
  { quarter: 1, earnings: 13000 },
  { quarter: 2, earnings: 16500 },
  { quarter: 3, earnings: 14250 },
  { quarter: 4, earnings: 19000 }
];

const App = ({ width = 400, height = 400 }) => {
  const xMax = width;
  const yMax = height - 120;

  const x = d => d.quarter;
  const y = d => d.earnings;

  // scales
  const xScale = scaleBand({
    rangeRound: [0, xMax],
    domain: data.map(x),
    padding: 0.4
  });
  const yScale = scaleLinear({
    rangeRound: [yMax, 0],
    domain: [0, max(data, y)]
  });

  return (
    <div style={styles}>
      <h1>VX basic demo</h1>
      <svg width={width} height={height}>
        {data.map((d, i) => {
          const barHeight = yMax - yScale(y(d));
          return (
            <Bar
              width={xScale.bandwidth()}
              height={barHeight}
              x={xScale(x(d))}
              y={yMax - barHeight}
              data={{ x: x(d), y: y(d) }}
            />
          );
        })}
      </svg>
    </div>
  );
};

More involved than previous examples, but it means you have more control and fight the library less often. VX does the tedious stuff for you, so you can focus on the stuff that matters. This code creates value accessor methods and D3 scales, then iterates over an array of data and renders a <Bar> for each. The bar gets a bunch of props.
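The pattern VX embodies, D3-style math feeding React-rendered elements, can be boiled down even further. The sketch below hand-rolls a linear scale (so it runs without d3 installed) and computes bar heights the same way the VX example does; the names here are illustrative, not from any library:

```javascript
// Minimal "React renders, D3 computes" sketch: a hand-rolled linear
// scale standing in for d3's scaleLinear, producing bar geometry.
const data = [
  { quarter: 1, earnings: 13000 },
  { quarter: 2, earnings: 16500 },
  { quarter: 3, earnings: 14250 },
  { quarter: 4, earnings: 19000 }
];

// Maps a value from the domain [d0, d1] to the range [r0, r1].
function scaleLinear([d0, d1], [r0, r1]) {
  return v => r0 + ((v - d0) / (d1 - d0)) * (r1 - r0);
}

const yMax = 280;
const yScale = scaleLinear([0, 19000], [yMax, 0]); // inverted: SVG y grows downward

// Each entry becomes the props a <Bar>-like component would receive.
const bars = data.map(d => ({
  x: d.quarter,
  height: Math.round(yMax - yScale(d.earnings))
}));
console.log(bars[3].height); // 280 (the tallest bar fills the whole range)
```

Once the numbers are computed like this, the React side is just a map over `bars` emitting SVG rect elements, which is exactly the division of labor the VX example shows.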
https://reactfordataviz.com/react-d3/2/
02 June 2010 12:03 [Source: ICIS news] LONDON (ICIS news)--Unspecified problems with the production of base oils at Shell’s refinery at Stanlow, UK, have left customers scrambling to find alternative supplies in an already tight market, sources said on Wednesday. According to a company source, there have been “a few problems” at the refinery that have lasted “for a few days”. Shell would not officially comment on the extent or expected duration of the production problems. “We can’t seem to get anything out of Stanlow,” said a UK-based lubricants blender, referring to base oils. The Stanlow refinery has the capacity to produce 260,000 tonnes/year of Group I base oils. The European market for base oils has tightened since the first quarter, partly because producers were exporting material to regions where prices were higher. European domestic prices for solvent-neutral 150 base oils rose from $810-835/tonne (€664-685/tonne) FOB (free on board) NWE (northwest Europe) in March to $940-980/tonne FOB NWE on 1 June, according to global chemical market intelligence service ICIS pricing. Producers are seeking further price increases for June deliveries to European customers. Shell is understood to be negotiating the sale of the entire Stanlow site as part of a plan to rationalise its European refining operations. ($1 = €0.82)
http://www.icis.com/Articles/2010/06/02/9363990/problems-at-shells-stanlow-refinery-tighten-uk-base-oils-market.html
I am working on a project where we have to implement an algorithm that is proven in theory to be cache friendly. In simple terms, if the input has size N and B elements fit in a cache line, the algorithm should incur O(N/B) cache misses. To check this I wrote the following simple program:

#include <iostream>
using namespace std;

struct node{
  int l, r;
};

int main(int argc, char* argv[]){
  int n = 1000000;
  node* A = new node[n];
  int i;
  for(i=0;i<n;i++){
    A[i].l = 1;
    A[i].r = 4;
  }
  return 0;
}

Each node is 8 bytes, so with 64-byte cache lines I would expect roughly 1000000/8 = 125000 cache misses. Compiled with -O3 and measured with perf:

perf stat -B -e cache-references,cache-misses ./cachetests

 Performance counter stats for './cachetests':

           162,813      cache-references
           142,247      cache-misses              #   87.368 % of all cache refs

       0.007163021 seconds time elapsed

Trying the same measurement with PAPI instead:

#include <iostream>
#include <papi.h>
using namespace std;

struct node{
  int l, r;
};

void handle_error(int err){
  std::cerr << "PAPI error: " << err << std::endl;
}

int main(int argc, char* argv[]){
  int numEvents = 2;
  long long values[2];
  int events[2] = {PAPI_L3_TCA,PAPI_L3_TCM};

  if (PAPI_start_counters(events, numEvents) != PAPI_OK)
    handle_error(1);

  int n = 1000000;
  node* A = new node[n];
  int i;
  for(i=0;i<n;i++){
    A[i].l = 1;
    A[i].r = 4;
  }

  if ( PAPI_stop_counters(values, numEvents) != PAPI_OK)
    handle_error(1);

  cout<<"L3 accesses: "<<values[0]<<endl;
  cout<<"L3 misses: "<<values[1]<<endl;
  cout<<"L3 miss/access ratio: "<<(double)values[1]/values[0]<<endl;
  return 0;
}

This gives:

L3 accesses: 3335
L3 misses: 848
L3 miss/access ratio: 0.254273

You can go through the source files of both perf and PAPI to find out to which performance counter they actually map these events, but it turns out they are the same (assuming Intel Core i here): event 2E with umask 4F for references and 41 for misses. In the Intel 64 and IA-32 Architectures Developer's Manual these events are described as:

2EH 4FH LONGEST_LAT_CACHE.REFERENCE  This event counts requests originating from the core that reference a cache line in the last level cache.
2EH 41H LONGEST_LAT_CACHE.MISS  This event counts each cache miss condition for references to the last level cache.

That seems to be ok. So the problem is somewhere else.
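The expected-miss figure of 1000000/8 = 125000 in the question can be spelled out. The sizes below are assumptions consistent with the question's setup (an 8-byte struct and 64-byte cache lines, typical for Intel Core i parts), not something reported by perf or PAPI:

```javascript
// Back-of-envelope for compulsory misses in one sequential pass over the array.
const n = 1000000;                          // array length in the test program
const nodeBytes = 8;                        // struct node { int l, r; }
const lineBytes = 64;                       // assumed cache-line size
const nodesPerLine = lineBytes / nodeBytes; // 8 nodes share one line
const expectedMisses = n / nodesPerLine;    // one miss per newly touched line
console.log(expectedMisses); // 125000
```

This is the baseline both measurements are being compared against; the puzzle is why neither tool reports a number near it out of the box.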
Here are my reproduced numbers, only that I increased the array length by a factor of 100. (I noticed large fluctuations in timing results otherwise, and with a length of 1,000,000 the array almost fits into your L3 cache still.) main1 here is your first code example without PAPI and main2 your second one with PAPI.

$ perf stat -e cache-references,cache-misses ./main1

 Performance counter stats for './main1':

        27.148.932      cache-references
        22.233.713      cache-misses              #   81,895 % of all cache refs

       0,885166681 seconds time elapsed

$ ./main2
L3 accesses: 7084911
L3 misses: 2750883
L3 miss/access ratio: 0.388273

These obviously don't match. Let's see where we actually count the LLC references. Here are the first few lines of perf report after perf record -e cache-references ./main1:

31,22%  main1  [kernel]  [k] 0xffffffff813fdd87
16,79%  main1  main1     [.] main
 6,22%  main1  [kernel]  [k] 0xffffffff8182dd24
 5,72%  main1  [kernel]  [k] 0xffffffff811b541d
 3,11%  main1  [kernel]  [k] 0xffffffff811947e9
 1,53%  main1  [kernel]  [k] 0xffffffff811b5454
 1,28%  main1  [kernel]  [k] 0xffffffff811b638a
 1,24%  main1  [kernel]  [k] 0xffffffff811b6381
 1,20%  main1  [kernel]  [k] 0xffffffff811b5417
 1,20%  main1  [kernel]  [k] 0xffffffff811947c9
 1,07%  main1  [kernel]  [k] 0xffffffff811947ab
 0,96%  main1  [kernel]  [k] 0xffffffff81194799
 0,87%  main1  [kernel]  [k] 0xffffffff811947dc

So what you can see here is that only 16.79% of the cache references actually happen in user space; the rest are due to the kernel. And here lies the problem. Comparing this to the PAPI result is unfair, because PAPI by default only counts user-space events. Perf, however, by default collects user and kernel space events. For perf we can easily reduce to user-space collection only:

$ perf stat -e cache-references:u,cache-misses:u ./main1

 Performance counter stats for './main1':

         7.170.190      cache-references:u
         2.764.248      cache-misses:u            #   38,552 % of all cache refs

       0,658690600 seconds time elapsed

These seem to match pretty well.
Edit: Let's look a bit closer at what the kernel does, this time with debug symbols and cache misses instead of references:

59,64%  main1  [kernel]  [k] clear_page_c_e
23,25%  main1  main1     [.] main
 2,71%  main1  [kernel]  [k] compaction_alloc
 2,70%  main1  [kernel]  [k] pageblock_pfn_to_page
 2,38%  main1  [kernel]  [k] get_pfnblock_flags_mask
 1,57%  main1  [kernel]  [k] _raw_spin_lock
 1,23%  main1  [kernel]  [k] clear_huge_page
 1,00%  main1  [kernel]  [k] get_page_from_freelist
 0,89%  main1  [kernel]  [k] free_pages_prepare

As we can see, most cache misses actually happen in clear_page_c_e. This is called when a new page is accessed by our program. As explained in the comments, new pages are zeroed by the kernel before allowing access, therefore the cache miss already happens here.

This messes with your analysis, because a good part of the cache misses you expect happen in kernel space. However, you cannot guarantee under which exact circumstances the kernel actually accesses memory, so there might be deviations from the behavior expected by your code.

To avoid this, build an additional loop around your array-filling one. Only the first iteration of the inner loop then incurs the kernel overhead. As soon as every page in the array has been accessed, there should be no contribution left. Here is my result for 100 repetitions of the outer loop:

$ perf stat -e cache-references:u,cache-references:k,cache-misses:u,cache-misses:k ./main1

 Performance counter stats for './main1':

     1.327.599.357      cache-references:u
        23.678.135      cache-references:k
     1.242.836.730      cache-misses:u            #   93,615 % of all cache refs
        22.572.764      cache-misses:k            #   95,332 % of all cache refs

      38,286354681 seconds time elapsed

The array length was 100,000,000 with 100 iterations, and therefore you would have expected 1,250,000,000 cache misses by your analysis. This is pretty close now. The deviation is mostly from the first loop, which is loaded into the cache by the kernel during page clearing.
With PAPI, a few extra warm-up loops can be inserted before the counter starts, so the result fits the expectation even better:

$ ./main2
L3 accesses: 1318699729
L3 misses: 1250684880
L3 miss/access ratio: 0.948423
https://codedump.io/share/mJAGIZ5Ezqpn/1/why-does-perf-and-papi-give-different-values-for-l3-cache-references-and-misses
I have a data table that I am trying to filter based on the date in one column. I would like to filter the data so that only rows whose lastModified column holds a date one year or older remain, but even getting it to filter on some hard-coded date would be a good start. The data is in string format, so I am trying to use the new Date() function to convert it to a date.

var table = $('#database').DataTable( {
    fixedHeader: true,
    data: dataSet,
    columns: [
        { data: "processName" },
        { data: "processLob" },
        { data: "processOwner" },
        { data: "RiskReviewer" },
        { data: "lastModified" }]
} );

var filteredData = table
    .column( { data: "lastModified" } )
    .data()
    .filter( function ( value, index ) {
        return new Date(value) < 2015-10-10 ? true : false;
    } );

First thing you are going to want to do is add a "columnDefs" object for your date column and specify its type as "date". DataTables has built-in date parsing as long as you are following a well-known format. ColumnType API Def

If that doesn't get you there completely, then you will want to define a render function for your date column on the new columnDef object you just created. There you can check the render type and return a "nice" value for display and a raw data value (ideally a value of type Date) for everything else. Render API Definition

Also some general advice: don't try to fight the library. It actually is extremely flexible and can handle a lot. So use the built-in API functions wherever possible. Usually things go awry when people try to manipulate the table manually using jQuery. Under the covers the DataTables plugin maintains a ton of state that never makes it to the DOM. Basically, if there is a function in the API for it, use it.

EDIT: Adding an answer to the original poster's question even though he found another solution. One thing to keep in mind is that "filter" is intended only to give you back a filtered data set. It will not change the display in the grid.
Instead you will want to use "search" on the "column()" API item to filter the displayed rows in the DataTable. There is a small problem with that, however: the search method only accepts regular values, not functions. So if you want to implement this you have to supply a custom search function, like so:

// The $.fn.dataTable.ext.search array is shared amongst all DataTables and
// all columns and search filters are evaluated in the order in which they
// appear in the array until a boolean value is returned.
$.fn.dataTable.ext.search.unshift(function(settings, data, dataIndex) {
    // Get an API instance for the table being filtered.
    var api = new $.fn.dataTable.Api(settings);

    // Using a negative value to get the column wraps around to the end of
    // the columns so "-1" will always be your last column.
    var dateColumn = api.column(-1);

    // We get the data index of the dateColumn and compare it to the index
    // for the column currently being searched.
    if (dateColumn.index() !== dataIndex) {
        // Pretty sure this indicates to skip this search filter
        return null;
    }

    var columnSearchingBy = api.column(dataIndex);

    // Allows the data to be a string, milliseconds, UTC string format ..etc
    var columnCellData = new Date(data.lastModified);
    var valueToSearchBy = new Date(columnSearchingBy.search());

    // Ok this is one of the worst named methods in all of javascript.
    // Doesn't actually return a meaningful time. Instead it returns a
    // numeric value for the number of milliseconds since ~ 1970 I think.
    //
    // Kind of like "ticks()" does in other languages except ticks are
    // measured differently. The search filter I am applying here is to
    // only show dates in the DataTable that have a lastModified after or
    // equal the column search.
    return (valueToSearchBy.getTime() >= columnCellData.getTime());
});

// So this should use our fancy new search function applied to our datetime
// column. This will filter the displayed values in the DataTable and from
// that just a small filter on the table to get all the data for the rows
// that satisfy the search filter.
var filteredData = table
    .column( { data: "lastModified" } )
    .search('2015-10-10')
    .draw();

Even though you found another way to go on this one, maybe the above will help you out later on.
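The heart of that custom filter is just a Date comparison, which can be exercised without DataTables at all. A small sketch (hypothetical data; the predicate keeps rows whose lastModified is on or before the searched date, i.e. the "a given date or older" behavior the question asked for):

```javascript
// The getTime() comparison the filter boils down to, in isolation.
function onOrBefore(rowDateString, searchDateString) {
  // Date.getTime() returns milliseconds since the Unix epoch, so comparing
  // the two numbers compares the dates chronologically.
  return new Date(rowDateString).getTime() <= new Date(searchDateString).getTime();
}

const rows = [
  { processName: "A", lastModified: "2014-05-01" },
  { processName: "B", lastModified: "2015-10-10" },
  { processName: "C", lastModified: "2016-01-15" }
];

const kept = rows.filter(r => onOrBefore(r.lastModified, "2015-10-10"));
console.log(kept.map(r => r.processName)); // [ 'A', 'B' ]
```

Once a predicate like this works on plain data, wiring it into `$.fn.dataTable.ext.search` is only a matter of pulling the cell value and the searched value out of the table.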
https://codedump.io/share/r2H2iGePM8l4/1/how-to-filter-jquery-data-table-by-date-column
Closed Bug 331510 Opened 14 years ago Closed 14 years ago

Add knowledge of subdomains to necko (create nsEffectiveTLDService)

Categories: Core :: Networking, enhancement
Target Milestone: mozilla1.8.1
People: Reporter: darin.moz, Assigned: pamg.bugs
Attachments: 3 files, 5 obsolete files

Add knowledge of TLDs to necko

To deal with bugs like bug 319643, bug 154496, bug 142179 and of course bug 263931 and bug 252342, it would help if Firefox had some local knowledge of TLDs. This is something we have avoided doing in the past because we did not want to hardcode knowledge of TLDs in the browser. However, with the advent of auto-update (and binary patching), it may make sense to ship the browser with a hard-coded list of TLDs. Of course, some users are not able to benefit from auto-update (readonly installations), so a solution along these lines is not quite perfect. It may be good enough, however.

Target Milestone: --- → mozilla1.8.1

Yngve Pettersen has just published two internet drafts which try and solve this problem without having hard-coded TLD knowledge in the browser. If we were to unite behind one or both of these methods, might that be a better solution? The former details what they are doing now; the latter what they would like to do in future. The drafts are located at:

Gerv

I'm familiar with the former, but I'm too concerned with reliability and DNS performance to go that route. When I investigated that solution a couple of years back there were quite a few ccTLDs that had DNS entries (e.g., "co.tv"). The globally managed TLD list is very interesting. We could implement that today, and have our products fetch the list from Mozilla. They could do so asynchronously, purely as an update mechanism. If we used the same format as described in that file, then it would be possible for Mozilla to set up a CNAME or a HTTP redirect in the future to cause browsers to fetch from the global location.
Darin: there may be a few TLDs which have DNS entries, but if it's only a small minority, then a) Opera's solution is a great improvement, and b) once we both implement it, we could do a bit of evangelism. This solution does have the advantage of being fairly low maintenance. Still, IMO either would do, and both are better than doing our own thing.

Gerv

> Darin: there may be a few TLDs which have DNS entries, but if it's only a
> small minority, then a) Opera's solution is a great improvement...

Remember that DNS can be very slow, especially for non-existent domains. NXDOMAIN results from DNS queries are often not cached. This solution might work really well if you have the right kind of DNS cache between you and the internet, but it could also be extremely bad for some users. That's just my assertion based on experience, so I could be wrong.

Gerv, what's wrong with the suggestion I made in comment #2 about having mozilla temporarily host the global TLD list?

Nothing's wrong with your suggestion. If we use Opera's format in a way which makes it possible to migrate to a global list, we aren't "doing our own thing". Do you want to contact Yngve and chat about it? His email address is yngve@opera.com.

Gerv

That makes a lot of sense. As long as we write our internals to expect a file in the format described in Yngve's second proposal, exactly where the file or files come from and when they're fetched (one monolithic file with updates, or individual files as a new TLD URI is encountered) should be straightforward to adjust.

Summary: Add knowledge of TLDs to necko → Add knowledge of subdomains to necko

One monolithic file makes most sense to me. One maintainer who takes email updates from community members and/or TLD operators will do a much better job than relying on 200+ overworked TLD-operator-employed peons. Yngve's examples are a bit abstract. Has anyone checked that his syntax covers all cases?
For example, the Japanese arrangements are notoriously complex - see bug 252342, comment 31. Gerv As I read it, Yngve's proposal currently doesn't provide a single file with all TLDs in it. They're hosted either by individual TLD maintainers or at a single site, but in any case divided into files by TLD. With the second option, it would be simple to gather a single file with all the TLDs. With the first, there's no way to know what TLDs exist a priori, so we'd have to grab the files for a new TLD as we encountered it in a URI. (Of course, we could seed with the most common ones.) Anyway, yes, it looks to me like the syntax supports the complications of .jp, as long as someone who understands them well sets up the file properly. I wouldn't want to volunteer for that job... I wonder... is there an existing mini-language we can repurpose, rather than invent a new one? Gerv Yngve reports that his proposals are still highly preliminary, and that the final file format may end up changing significantly before the dust settles. Given that, I'd say we should go ahead and write our TLD/sub-domain service with some easily handled, hard-coded (but incrementally updateable) list of the domains we know about for now, and keep an eye on this standard for future incorporation. So, I'm thinking of simply pulling Jo's list at (see bug 319643) into a file. A source file that contains only an array of suffixes would be a good compromise between ease of implementation (since it's a temporary solution) and ease of maintenance (by people who don't have to read C++). Yes, let's go ahead with that. Gerv? We've had this problem for years - is there any reason it's suddenly become urgent to fix? I know Yngve's proposal is preliminary, but surely we can work with him to firm it up? is out of date (for example, it doesn't include .me.uk). What's our error behaviour going to be for errors of omission and commission? seems like a better list, and more well-maintained. 
If one person can get that far in a relatively short time, I think having one central list maintained by us makes most sense. The amount of data per TLD is so small, I think one file (rather than 200 files) is better. I also think it should be updated using an HTTP request every e.g. 30 days (like Yngve's proposal) rather than using Firefox update, which some distributors disable in favour of their own package management mechanisms. Gerv Gerv: The impetus is Places. It wants a solution that would allow it to provide favorable groupings in the history UI. Places currently has a very short hardcoded list that you would probably prefer not to look at ;-) I've also wanted to fix the cookie problem for a long time. I'm happy to shoot for a list that we download periodically from mozilla.org servers. Also, in parallel I want to do whatever I can to help Yngve get his proposal through. But, I don't think we can wait for that. It's partly a question of how long we think Yngve's proposal will take to gel, combined with how often TLDs change, how many people would be affected by those changes, and how many people have update disabled. If the proposal is likely to be done fairly soon (whatever that might mean), then spending a lot of time on our own solution wouldn't be worthwhile -- as you say, the problem's been around for years. The longer we predict that it'll take for a standardized set of files to be in place, or at least for their format to be finalized, the more it's worth doing now even though it might well need to be scrapped later. Personally, I'm inclined to be optimistic, expect that we'll have a real standard before too long, and consequently put in a fairly stop-gap solution now. But I may be suffering from idealism. Pam: Agreed, but shipping FF2 w/ a hard-coded list of subdomains might be considered harmful over time. Since we cannot auto-update everybody, we probably do have to build a system to download a fresh file periodically. 
This is not hard to do in the Mozilla environment. WHATWG's local storage APIs need this information (for roughly the same reasons cookies do), so it'd be great to at least define a Necko API for it soonish. (The WHATWG design doesn't require such a list in the way cookies does, but yes, it could be made better if it has access to such a list.) Is the solution being proposed here able to cope with domains like uk.com and dyndns.org, or would those still be susceptible to attack? If we address this I'd like us to do a thorough job and include such domains, although that would make the list less "formal". That would be one reason to keep the list under our control, even if we did use a common format. Pseudo-registrars like uk.com could apply to have their domains added to the list. Gerv Darin: Why can't we auto-update everybody? Because people turn the auto-update off? Think about it this way. Because of folk who disable auto-update, we write a new piece of code that background downloads this list. That code is now a new complex entity that needs to be maintained and bugfixed. And privacy folk will no doubt howl for the moon and want it turned off. Since right now we're "super buggy", what's the disadvantage of being only "partially buggy" for new tlds that get created since the user's last install? The consequences of having an out-of-date TLD list could hardly be worse than the consequences of not being up to date with security fixes. Correct me if I'm wrong, but I don't believe SeaMonkey supports auto-updates. beng, roc: you guys make compelling arguments. hmm... SeaMonkey may not support auto-updates but it could, and in any case roc's argument still holds that out of date users are at worse risk from the security updates they don't have than they are from a stale TLD list. They will eventually update. Note that such a list will never solve the cookie problem 100% as it also applies to a lesser extent to any DNS organization (e.g. 
hackedlab.school.edu setting a school.edu cookie to influence app at registration.school.edu). We need to implement Cookie2 and some way to expose the domain/path/etc information to web apps (such as turn document.cookie into an array of individual cookie objects, with a setter and toString to handle compatible uses; or a new document.cookies array if that's better). Using auto-updating files is sounding like the way to go, then. To allow local or individual modifications, we can look for a TLD list in the user's profile first, then fall back to a default file if none is found. Summarizing from the thread on moz.dev.platform, the planned file format is - One TLD or "TLD-like" subdomain is listed per line. - An asterisk * wildcard matches any valid sequence of characters, and may only appear as an entire level of a domain. - An exclamation mark ! at the start of a line indicates an exception to a previously encountered rule, where that subdomain should be used instead. - The last matching rule in the list is the one that will be used. - If no item in the list matches a given hostname, the level-1 domain is considered its TLD. - A line is only considered up to the first whitespace, leaving the rest available for comments. - Any line beginning with # is treated as a comment. For example: com # 1st-level domains are not needed in the list, but may be included for completeness uk *.uk be ac.be jp ac.jp ... *.hokkaido.jp # hosts in .hokkaido.jp can't set cookies below level 4... *.tokyo.jp ... !metro.tokyo.jp !pref.hokkaido.jp # ...except hosts in pref.hokkaido.jp, which can set level 3 ... !city.shizuoka.jp The primary points of difference between this and Yngve's proposal (see Comment #1) are that subdomains are required to be listed one per line here, that the TLDs must be explicit since they're all in one file, and that his wildcards of the form *1 would be expressed as *.*. Pam: a bit late, perhaps, to say so, but I like your proposal. 
It seems like the simplest thing that will do the job, but no simpler. Gerv Thanks, Gerv. It's not too late -- I got distracted by searchbox improvements for a2, so I haven't done more than barely start this, but it's still high on my list. Then it may be worth pointing out that Yngve is now probably changing to an XML-based format for flexibility. It may be worth the two of you syncing up again quickly - either one will persuade the other, or at least if you agree to differ you will do so understanding all the issues. Gerv In email, Yngve reports that although he has plans to switch to an XML-based format, they're presently stalled by author time, IETF organizational issues, and pending technical design questions. He doesn't have a new preliminary format, and he doesn't sound optimistic that anything new will be available in the near future. So I'm going to continue with the format described above, and trust that we can either switch our internal parser to whatever wider standard comes into existence in the indeterminate future or write a converter to produce the format we want. Sounds good to me. OK. :-) Gerv See for a description of the final expected file format and parsing behavior. Attachment #223520 - Flags: review?(darin) Comment on attachment 223520 [details] [diff] [review] First shot at it netwerk/dns/public/nsITLDService.idl shouldn't this use AUTF8String, given IDN? I think we are making a nomenclature mistake here. In every other context I've ever encountered it, "TLD" (Top Level Domain) means solely .uk, .com, .museum etc. - i.e. a single label. Here, we have overloaded it to mean "multi-label domain name which is permitted to have cookies set on it", which is something entirely different. Each dot-separated part of a hostname is known as a "level", and so "top level" should mean exactly that. I would suggest you remove all references to "TLD" from the names of the files, functions and interfaces. 
I'm not certain about what the best replacement terminology is (without using references specific to cookies), but perhaps "Privately Controlled Domain" is a good name for what you get when you strip a hostname down as far as you can without getting into those bits of the DNS which are available for public registration? getMinPrivatelyControlledLengthForHost()? This isn't very good - I'm sure we can think of better - but I think using "TLD" terminology is a mistake. Gerv. I'm fairly sure that's not what's meant, but I think the spec is very unclear on this point. Gerv I agree with comment #34. How about "Public Domain Suffix", abbreviated to PDS? The problem with Public Domain Suffix is that it's not the length of the Public part of the Domain that the API returns, it's the length of the public part plus the first private label - i.e. the bit you _can_ set a cookie for. Gerv How about simply "effective TLD". That makes clear that it's not a true TLD, but it's also a name that people will be able to understand without any additional explanation. "Public Domain Suffix" is subject to confusion with "Public-Domain Suffix", and most people would have to have it explained to know what it was. (In reply to comment #37) > The problem with Public Domain Suffix is that it's not the length of the Public > part of the Domain that the API returns, it's the length of the public part > plus the first private label - i.e. the bit you _can_ set a cookie for. It should be the length of the public part. If you give it "foo.com", it returns 3, for "com". If it's not doing that for some cases, that's a bug I need to fix. (In reply to comment #35) >. The ! has two simultaneous meanings: both "not" (*.hokkaido.jp matches any subdomain of hokkaido.jp, except that it does not match pref.hokkaido.jp) and "important" in the CSS sense (the *.hokkaido.jp rule is overridden by the pref.hokkaido.jp rule). I'll make the documentation clearer. 
I misread the spec; ignore my objection to the name Public Domain Suffix. Pam: I think "effective TLD" would still suffer from being confused with a TLD, which these things are not. The amount of explaining which is necessary would be the same or more than that required by "Public Domain Suffix". I do think we need to disentangle the idea of a Top Level Domain from the idea of a domain in which cookies can or cannot be set - even if it means renaming all the files, wiki pages and API calls. Sorry :-) So can we do better than Public Domain Suffix? How about Public Registration Suffix, i.e. the suffix above which public registrations are permitted? Or even just "Public Suffix"? Gerv. Gerv So how about the obvious compromise: Public-Level Domain, PLD? :-) I'd buy that :-) Gerv Taking a cue from the Summary of this bug, what about "Public Subdomain"? "Public Domain"? (In reply to comment #41) >. It's not really that ! has two meanings, as that you can view it in two different ways that completely overlap: every "not" rule is also an "important" override. If that's confusing, then forget about the "important" part. I've already changed the wiki to read "exception" instead of "important" throughout, and I'll be changing the source too. The fact that one rule is an exception to another rule is the, er, important part. The other wording was just a side effect of how my brain works; forget I ever mentioned it. :) It would be possible for the rules to cascade based on their position in the file, but it would be slower, and the flexibility it would permit is overkill. Even the complexity of .jp is covered by allowing individual exceptions to single-level-wildcard rules. Most of the bullet points in the wiki are an attempt to document the behavior thoroughly in case someone puts in an oddball file sometime and wonders what's going on. In practice, a real subdomain description file should be much more straightforward than that page might imply. 
I'd like to retain the concept of either "Level" or "Prefix"... A "subdomain", again, is usually thought to be a single level - e.g. "I own the subdomain gerv in the TLD .net". (I'm less sure about this one, though.) So, in descending order of preference: Public-Level Domain Public Registration Suffix Public Domain Public Subdomain ... Top Level Domain But to be honest, any of them would do except that last one. Gotta go into hospital now; I'm sure you'll cope without me :-) Gerv Comment on attachment 223520 [details] [diff] [review] First shot at it >Index: netwerk/dns/public/nsITLDService.idl >+interface nsITLDService : nsISupports ... >+ PRInt32 getHostTLDLength(in ACString aHostname); The ACString should be a AUTF8String instead since aHostname may be non-ASCII (aka UTF-8). I think it would be nice to define a getHostTLD helper function as well that returns the TLD as a string: AUTF8String getHostTLD(in AUTF8String aHostname); >Index: netwerk/dns/src/nsTLDService.cpp >+// For a complete description of the expected file format and parsing rules, see >+// Might be good to prefix this comment a bit to explain that this service reads its TLD rules from a file. >+ stopOK: If true, this node marks the end of a rule. >+ important: If true, this node marks the end of an important rule. It's a tad confusing that the comments below mention "stop" and "imp" instead of "stopOK" and "important". >+SubdomainNode mSubdomainTreeHead; The "m" prefix is generally reserved for member variables. For statics, we generally use a "s" prefix: static SubdomainNode sSubdomainTreeHead; That said, maybe this should be a member variable. As we discussed, you could then allocate the nsTLDService class statically as well. See nsThreadManager for an example of allocating an XPCOM singleton statically. Note the AddRef and Release implementations. 
I think it is also important to cleanup the hashtable on shutdown since our tools look for objects left at shutdown when looking for objects that may have leaked during a browser session. Also, when Gecko is embedded, it may be the case that NS_ShutdownXPCOM does not correspond to the app shutting down (although that is unlikely). >+// nsTLDService::Init >+// >+// Initializes the root subdomain node and loads the TLD file. >+nsresult >+nsTLDService::Init() >+{ >+ mSubdomainTreeHead.important = PR_FALSE; >+ mSubdomainTreeHead.stopOK = PR_FALSE; >+ mSubdomainTreeHead.children.Init(); What if Init fails? (i.e., out of memory) It is generally a good idea to catch those errors and return failure. >+ >+ nsresult rv = LoadTLDFile(); >+ return rv; nit: "return LoadTLDFile();" >+// nsTLDService::GetHostTLDLength >+// >+// The main external function: finds the length in bytes of the TLD for the >+// given hostname. >+NS_IMETHODIMP >+nsTLDService::GetHostTLDLength(const nsACString &aHostname, PRInt32 *TLDLength) >+{ >+ // Calcluate a default length: either the level-0 TLD, or the whole string >+ // length if no dots are present. >+ PRInt32 nameLength = FindEndOfName(aHostname); I think this can just be aHostname.Length() since you shouldn't need to worry about trailing whitespace in the given value. >+ if (dotLoc < 0) >+ defaultTLDLength = nameLength; >+ else >+ defaultTLDLength = nameLength - dotLoc - 1; nit: favor brackets when coding an else block >+PRInt32 >+FindEarlierDot(const nsACString &aHostname, PRUint32 aDotLoc) { >+ if (aDotLoc < 0) >+ return -1; >+ >+ // Searching for '.' one byte at a time is fine since UTF-8 is a superset of >+ // 7-bit ASCII. >+ const char *start; >+ PRUint32 endLoc = NS_CStringGetData(aHostname, &start) - 1; The APIs defined in nsXPCOMStrings.h are for use outside of Mozilla by extension authors and embedders (when MOZILLA_INTERNAL_API is not defined). 
For internal code such as this you should just avoid the functions declared in that header file (it's more efficient that way). The following is similar:

  nsACString::const_iterator iter;
  aHostname.BeginReading(iter);
  const char *start = iter.get();
  PRUint32 endLoc = iter.size_forward();

(I think the string code should define an RFindChar method that you could use to avoid having to roll your own like this!)

>+FindEndOfName(const nsACString &aHostname) {
...
>+  PRUint32 length = NS_CStringGetData(aHostname, &start);

Again, avoid NS_CStringGetData in internal code.

>+SubdomainNode *
>+FindMatchingChildNode(SubdomainNode *parent, const nsACString &aSubname,
>+                      PRBool aCreateNewNode)
>+{
>+  nsCString name(aSubname);
>+  PRBool important = PR_FALSE;
>+
>+  // Is this node important?
>+  if (StringBeginsWith(aSubname, NS_LITERAL_CSTRING("!"))) {

Can you use the .First() method here? It gets angry if aSubname is empty, so watch out for that. Also, once you have your nsACString available as a nsCString (see "name" above), it is highly preferable to use that directly instead of the nsACString type. nsCString's methods for accessing the string's data are mostly inlined and much faster to call.

>+    important = PR_TRUE;
>+    name = Substring(aSubname, 1, aSubname.Length());

You can also call: name.Cut(0, 1)

>+  // Look for an exact match.
>+  SubdomainNode *result = nsnull;
>+  SubdomainNode *match;
>+  if (parent->children.Get(name, &match)) {
>+    result = match;
>+  }
>+
>+  // If the subname wasn't found, optionally create it.
>+  else if (aCreateNewNode) {
>+    SubdomainNode *node = new SubdomainNode;

check for out of memory

>+    node->children.Init();
>+    parent->children.Put(name, node);

ditto

>+// AddTLDEntry
>+//
>+// Adds the given domain name rule to the TLD tree.
>+nsresult >+AddTLDEntry(const nsACString &aDomainName) { >+ SubdomainNode *node = &mSubdomainTreeHead; >+ >+ PRInt32 dotLoc = FindEndOfName(aDomainName); >+ while (dotLoc > 0) { >+ PRInt32 nextDotLoc = FindEarlierDot(aDomainName, dotLoc - 1); >+ const nsACString &subname = Substring(aDomainName, nextDotLoc + 1, >+ dotLoc - nextDotLoc - 1); change that to |const nsCSubstring &subname| as its methods will be more efficient to call than those on nsACString. And, then you could change FindMatchingChildNode to take a nsCSubstring instead too. >+ // Open the file as an input stream. >+ nsCOMPtr<nsIFileInputStream> fileStream = >+ do_CreateInstance(NS_LOCALFILEINPUTSTREAM_CONTRACTID, &rv); >+ NS_ENSURE_SUCCESS(rv, rv); >+ >+ // 0x01 == read-only mode >+ rv = fileStream->Init(tldFile, 0x01, -1, nsIFileInputStream::CLOSE_ON_EOF); >+ NS_ENSURE_SUCCESS(rv, rv); I recommend using NS_NewLocalFileInputStream() from nsNetUtil.h >+ nsCOMPtr<nsILineInputStream> lineStream = do_QueryInterface(fileStream, &rv); >+ NS_ENSURE_SUCCESS(rv, rv); >+ >+ nsCAutoString lineData; >+ PRBool moreData = PR_TRUE; >+ while (moreData) { >+ rv = lineStream->ReadLine(lineData, &moreData); >+ if (NS_SUCCEEDED(rv)) { >+ if (! lineData.IsEmpty() && >+ ! StringBeginsWith(lineData, NS_LITERAL_CSTRING("//"))) { I suggest moving the NS_LITERAL_CSTRING outside of the loop. Use the NAMED version like this: NS_NAMED_LITERAL_CSTRING(commentStart, "//"); That way you do a little less work inside the loop. >Index: netwerk/dns/src/nsTLDService.h ... >+ static nsTLDService* GetTLDService() This seems to be unused. Attachment #223520 - Flags: review?(darin) → review- Personally, I think I like "effective TLD" the best. With a little documentation, we can easily make clear the fact that this is not the actual TLD, as that is never going to have a dot in it. interface nsITLDService : nsISupports { /** * Get the effective top-level domain from the given hostname. 
The effective
 * TLD is the top-most domain under which domains may be registered. The
 * effective TLD may therefore contain dots. For example, the effective TLD
 * of "" is "co.uk" because the "uk" ccTLD does not permit the
 * registration of "bbc.uk" as a domain. Similarly, the effective TLD of
 * "" is "com".
 */
AUTF8String getEffectiveTLD(in AUTF8String hostname);

/**
 * blah blah...
 */
PRUint32 getEffectiveTLDLength(in AUTF8String hostname);
};

I think explaining "effective TLD" to someone is much easier than explaining "public-level domain" because the latter doesn't stick in the mind as easily. Moreover, PLD as an acronym is not very clear (nsIPLDService!?!) and reminds me of PLDHash! That's just my opinion. I'd like to know what others think.

(In reply to comment #46)

All done, except:

> static SubdomainNode sSubdomainTreeHead;
>
> That said, maybe this should be a member variable. As we discussed, you
> could then allocate the nsTLDService class statically as well.

I can't think of a compelling reason to change it to a static object since it's already working this way, but I'll keep the idea in mind for future situations.

> I recommend using NS_NewLocalFileInputStream() from nsNetUtil.h

Done; but that returns an nsIInputStream instead of the nsIFileInputStream I was using before. What are the tradeoffs between the two?

Also, please take a close look at the error handling. I tried to make reasonable decisions about when to abort on an error and when to attempt to recover, but I'm not sure I'm conforming to all the conventions.

Attachment #223520 - Attachment is obsolete: true
Attachment #224274 - Flags: review?(darin)

Drop this in your "dist/bin/res/" directory for testing.

This JS file can be used with xpcshell to test the Effective TLD Service. Also, I renamed it to "Effective TLD Service" and changed all the references to pseudo-TLDs to "effective TLD". It can change again, though.
A few comments, with the caveat that I've looked at the wiki documentation but not at the patch. That document specifies a few things regarding case insensitivity that seem dangerously vague: "Rules are not case-sensitive" and "All matching is case-insensitive.". Is your definition of case-equivalence that of ASCII, a specific version of Unicode (ugly, but what IDN does), or the latest version of Unicode? It might be a good idea to say that all entries in the file are normalized according to RFC 3491, which specifies the combination of case folding to lowercase, NFKC Unicode normalization, and some other normalization. This might prevent people accidentally putting the wrong form in the file. For example, consider all 16 combinations of the following 4 strings, taking one as the end of the domain to be matched and the other as an entry in the file [this is in UTF-8]:

ελλάδα.gr
ελλάδα.gr
ΕΛΛΆΔΑ.GR
ΕΛΛΆΔΑ.GR

(In reply to comment #52)
> Is your definition of case-equivalence that of ASCII, a
> specific version of Unicode (ugly, but what IDN does), or
> the latest version of Unicode?

I'd been wondering about that issue myself. It normalizes both the file entries and the incoming hostname, currently using ToLowerCase() from nsReadableUtils.cpp. But now that I know what to search for, it looks like it should use nsIDNService::stringPrep instead. I'll switch to that and clarify the docs.

Following further discussions with Darin, I'm removing all the additional convenience functions from the interface, dropping back to the original getEffectiveTLDLength() only. Others can be added back in if someone actually needs them. In addition, the user's rule file is appended to the system file, rather than replacing it; and rules and hostnames are normalized according to RFC 3454 using the IDN service. I'd still appreciate a close look at the error-handling.
Attachment #224274 - Attachment is obsolete: true Attachment #224730 - Flags: review?(darin) Attachment #224274 - Flags: review?(darin) Includes non-ASCII rules Attachment #224275 - Attachment is obsolete: true (UTF-8): tests non-ASCII hostnames Attachment #224276 - Attachment is obsolete: true Comment on attachment 224730 [details] [diff] [review] Patch addressing Darin's and David's comments >Index: netwerk/dns/public/nsIEffectiveTLDService.idl >+interface nsIURI; nit: this forward decl can go away >+ * The hostname will be normalized using nsIDNService::Normalize, which nit: nsIIDNService::normalize ... two "I"'s :) >+ * follows RFC 3454. getEffectiveTLDLength() will fail, generating an >+ * error, if the hostname contains characters that are invalid in URIs. nit: Perhaps this error should be documented using a @throws section. >+ PRInt32 getEffectiveTLDLength(in AUTF8String aHostname); nit: perhaps PRUint32 is better since the length is never negative. >Index: netwerk/dns/src/nsEffectiveTLDService.cpp >+nsEffectiveTLDService* nsEffectiveTLDService::sInstance = nsnull; >+SubdomainNode sSubdomainTreeHead; >+ >+// Semaphore to prevent using tree before it is fully loaded. >+SubdomainSemaphore sSubdomainTreeLoadState = kSubdomainTreeNotLoaded; I think that sSubdomainTreeHead and sSubdomainTreeLoadState both should be declared static. >+void LOG_EFF_TLD_TREE(const char *msg = "", >+ SubdomainNode *head = &sSubdomainTreeHead) >+{ >+#if defined(EFF_TLD_SERVICE_DEBUG) && EFF_TLD_SERVICE_DEBUG >+ if (msg && msg != "") >+ printf("%s\n", msg); >+ >+ PRInt32 level = 0; >+ head->children.EnumerateRead(LOG_EFF_TLD_NODE, &level); >+#endif // EFF_TLD_SERVICE_DEBUG >+ >+ return; >+} nit: kill unnecessary return statement? >+nsresult >+nsEffectiveTLDService::Init() >+{ >+ // Simple lock while file is loading. 
>+ if (sSubdomainTreeLoadState == kSubdomainTreeLoading) >+ return NS_ERROR_NOT_INITIALIZED; >+ >+ if (sSubdomainTreeLoadState == kSubdomainTreeLoaded) >+ return NS_OK; >+ >+ sSubdomainTreeLoadState = kSubdomainTreeLoading; >+ >+ sSubdomainTreeHead.exception = PR_FALSE; >+ sSubdomainTreeHead.stopOK = PR_FALSE; >+ nsresult rv = NS_OK; >+ if (!sSubdomainTreeHead.children.Init()) >+ rv = NS_ERROR_OUT_OF_MEMORY; >+ >+ if (NS_SUCCEEDED(rv)) >+ rv = LoadEffectiveTLDFiles(); >+ >+ if (NS_SUCCEEDED(rv)) { >+ sSubdomainTreeLoadState = kSubdomainTreeLoaded; >+ } >+ else { >+ // In the event of an error, clear any partially constructed tree and reset >+ // the semaphore. >+ EmptyEffectiveTLDTree(); >+ } I'm confused by the use of this lock flag. How can nsEffectiveTLDService::Init be called when sSubdomainTreeLoadState is equal to kSubdomainTreeLoading? Also, given that Init is called by your XPCOM factory constructor, it should only ever be called once. Since this object is not threadsafe (non-threadsafe addref and release), I'm left being confused by the locking. >+void >+EmptyEffectiveTLDTree() >+{ >+ if (sSubdomainTreeHead.children.IsInitialized()) { >+ sSubdomainTreeHead.children.EnumerateRead(DeleteSubdomainNode, nsnull); >+ sSubdomainTreeHead.children.Clear(); >+ } >+ sSubdomainTreeLoadState = kSubdomainTreeNotLoaded; >+ >+ return; >+} nit: return statement unnecessary, but now i see a pattern... if you are consistent with the usage, then no prob. i just prefer less to read ;-) >+nsresult NormalizeHostname(nsACString &aHostname) I recommend changing this to nsCString instead because it will be slightly more efficient. You will then need to change AddEffectiveTLDEntry to also take a nsCString ref, but that should be do-able. The IsASCII and ToLowerCase operations will be slightly more efficient if you work with nsCString instead of nsACString. 
>+{ >+ nsACString::const_iterator iter; >+ aString.BeginReading(iter); >+ *start = iter.get(); >+ *length = aString.Length(); Use iter.size_forward() here instead of aString.Length() because it will be more efficient (inline accessor instead of a function call). >+SubdomainNode * >+FindMatchingChildNode(SubdomainNode *parent, const nsCSubstring &aSubname, >+ PRBool aCreateNewNode) >+{ >+ // Make a mutable copy of the name in case we need to strip ! from the start. >+ nsCString name(aSubname); I recommend using nsCAutoString here instead so that you leverage the stack for string buffer storage. >+ if (NS_SUCCEEDED(rv)) { >+ rv = effTLDFile->AppendNative(EFF_TLD_FILENAME); >+ NS_ENSURE_SUCCESS(rv, rv); >+ >+ rv = effTLDFile->Exists(&exists); >+ NS_ENSURE_SUCCESS(rv, rv); >+ } >+ } >+ else { ... >+ rv = effTLDFile->AppendNative(EFF_TLD_FILENAME); >+ NS_ENSURE_SUCCESS(rv, rv); >+ >+ rv = effTLDFile->Exists(&exists); >+ NS_ENSURE_SUCCESS(rv, rv); >+ } It looks like this code could be factored out slightly and then shared. Might be good for saving on codesize. Every little bit helps! :-) r=darin w/ nits picked Attachment #224730 - Flags: review?(darin) → review+ I'd like to continue to work on , using the rulesets defined here. However, I won't have time until the end of July, because I'll be in the Colombian jungle until then. Note: I'm working for a big telecommunications company, and my bosses are also interested in this work, especially Yngve's draft. Maybe it will be used in VoIP too, someday. (In reply to comment #57) All nits picked, except: > nit: return statement unnecessary, but now i see a pattern... if > you are consistent with the usage, then no prob. i just prefer > less to read ;-) A |return;| at the end of a void function is like a period at the end of a sentence. :) I prefer to make it visually and conceptually clear that the function was intended to end there. 
> >+nsresult NormalizeHostname(nsACString &aHostname)
>
> I recommend changing this to nsCString instead because it will be
> slightly more efficient.

Done. Should the string guide ( and the table at the end of that page) be changed? It currently says, "It is recommended that you use the most abstract interface possible as a function parameter, instead of using concrete classes."

Attachment #224730 - Attachment is obsolete: true

(In reply to comment #58)
> I'd like to continue to work on , using the
> rulesets defined here. However, I won't have time until the end of July,
> because I'll be in the Colombian jungle until then.

I'm really glad you're taking that on! If someone needs a sample file larger than the one attached here before then, I can put one together.

Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → FIXED

(In reply to comment #58)
> I'd like to continue to work on , using the
> rulesets defined here.

Bug 342314 now tracks creating a rule list for this service.

I don't really believe the blacklist approach would work. The problem is that it will only work for "official" TLDs, not unofficial ones. I think unofficial pseudo-TLDs (like .com.be) should also be supported. Websites registered in those domains also have a right to security. And it would be quite hard to define a border for this. What about things like .geocities.com?

I don't know a lot about this, but when using `dig` (DNS lookup) and looking up some domains, it seems that the ;;Authority Section always got the exact right domain for which I would limit cookies etc. What's wrong with this?

When using my normal DNS server (a cheap DSL router), I get no authority section at all. When using the DNS server of my ISP directly, I get the list of root servers as the authority section. A very limited test, but it shows that you can't really trust the use of that section. It might not be available, or it might not work.

Flags: blocking1.8.1?
Not a blocker, but if something is well baked and ready for release, please request an approval on 181. Flags: blocking1.8.1? → blocking1.8.1- This new interface has been on the trunk for quite a while, but it hasn't had any data file, and nothing actually uses it. It has therefore had little to no testing apart from what I did in development, as far as I know. I'm not sure what benefit would be had from landing it on the branch, but it should likewise be little harm apart from code size. Please reconsider this for FF2, the globalStorage feature really needs this data. Flags: blocking1.8.1- → blocking1.8.1? Comment on attachment 225038 [details] [diff] [review] Patch as checked in dveditz, is globalStorage going to have time to start using this in FF2? Since there's a need, I guess there's no better way to get more testing than to bake in beta2 for a while. Requesting approval. Given that there's no data file yet, what's the point? Is that expected to appear soon? (In reply to comment #68) > Given that there's no data file yet, what's the point? Is that expected to > appear soon? Yes, work on that is progressing in bug 342314. OK, so we'll consider the patch if there's a data file, but it's not a blocker. Flags: blocking1.8.1? → blocking1.8.1- Even the preliminary data file currently posted for bug 342314 would be a significant improvement (once the bad non-ASCII characters are fixed). Any domains not listed in it simply fall back to the current behavior. The patch here has been baking since June. Plusing to keep on the radar if we can get a working data file and a patch together with low enough risk for FF2 Flags: blocking1.8.1- → blocking1.8.1+ Comment on attachment 225038 [details] [diff] [review] Patch as checked in Clearing the approval request as per previous driver comments; when there's a working data file that's been tested on the trunk, please renom! 
Also, it seems like it's actually better not to take it if we don't have a data file, since if we ship in a state that has the code but not the data file, it might be harder for an extension that wants to use the code if it's present to do the detection of whether it's really there... Talking to DVeditz today this is not wired up to anything on the 18branch and we don't have a patch ready to hookup to cookies. So removing this from the 18 list... Flags: blocking1.8.1+ → blocking1.8.1- Summary: Add knowledge of subdomains to necko → Add knowledge of subdomains to necko (create nsEffectiveTLDService) So this interface isn't really usable from JavaScript. At least not if you want to use it with non-ASCII tlds... See bug 368989. Flags: wanted1.8.1.x+
https://bugzilla.mozilla.org/show_bug.cgi?id=331510
I need to allocate contiguous space for a 3D array. (EDIT:) I GUESS I SHOULD HAVE MADE THIS CLEAR IN THE FIRST PLACE but in the actual production code, I will not know the dimensions of the array until run time. I provided them as constants in my toy code below just to keep things simple. I know the potential problems of insisting on contiguous space, but I just have to have it. I have seen how to do this for a 2D array, but apparently I don't understand how to extend the pattern to 3D. When I call the function to free up the memory, free_3d_arr, I get:

    lowest lvl
    mid lvl
    a.out(2248,0x7fff72d37000) malloc: *** error for object 0x7fab1a403310: pointer being freed was not allocated

    #include <stdio.h>
    #include <stdlib.h>

    int ***calloc_3d_arr(int sizes[3]){
        int ***a;
        int i,j;
        a = calloc(sizes[0],sizeof(int**));
        a[0] = calloc(sizes[0]*sizes[1],sizeof(int*));
        a[0][0] = calloc(sizes[0]*sizes[1]*sizes[2],sizeof(int));
        for (j=0; j<sizes[0]; j++) {
            a[j] = (int**)(a[0][0]+sizes[1]*sizes[2]*j);
            for (i=0; i<sizes[1]; i++) {
                a[j][i] = (int*)(a[j]) + sizes[2]*i;
            }
        }
        return a;
    }

    void free_3d_arr(int ***arr) {
        printf("lowest lvl\n");
        free(arr[0][0]);
        printf("mid lvl\n");
        free(arr[0]);   // <--- This is a problem line, apparently.
        printf("highest lvl\n");
        free(arr);
    }

    int main() {
        int ***a;
        int sz[] = {5,4,3};
        int i,j,k;
        a = calloc_3d_arr(sz);
        // do stuff with a
        free_3d_arr(a);
    }

Since you are using C, I would suggest that you use real multidimensional arrays:

    int (*a)[sz[1]][sz[2]] = calloc(sz[0], sizeof(*a));

This allocates contiguous storage for your 3D array. Note that the sizes can be dynamic since C99. You access this array exactly as you would with your pointer arrays:

    for(int i = 0; i < sz[0]; i++) {
        for(int j = 0; j < sz[1]; j++) {
            for(int k = 0; k < sz[2]; k++) {
                a[i][j][k] = 42;
            }
        }
    }

However, there are no pointer arrays under the hood; the indexing is done by the magic of pointer arithmetic and array-pointer-decay. And since a single calloc() was used to allocate the thing, a single free() suffices to get rid of it:

    free(a);   // that's it.
https://codedump.io/share/oOxDnrswRJfl/1/allocating-contiguous-memory-for-a-3d-array-in-c
Python Programming Training Classes in Cambridge, Massachusetts and surrounding areas via our hands-on, expert-led courses. All of our classes are offered on an onsite, online, or public instructor-led basis. Here is a list of our current Python Programming related training offerings in Cambridge:

- Agile Development with Scrum: 7 March, 2022 - 8 March, 2022
- Intermediate Python 3.x: 28 March, 2022 - 31 March, 2022
- ASP.NET Core MVC - VS2019/Core 5.0: 19 January, 2022 - 20 January, 2022
- Advanced C++ Programming: 24 January, 2022 - 28 January,?

Checking to see if a file exists is a two-step process in Python. Simply import the module shown below and invoke the isfile function:

    import os.path
    os.path.isfile(fname)

…
https://hartmannsoftware.com/Training/Python/Cambridge-Massachusetts
RationalWiki:Saloon bar/Archive194 Contents - 1 Republicans and technology - 2 User page spam - 3 Fizxing keyboasrd? - 4 Is it just me? - 5 Is this worth an article/is this covered here and I just can't find it? - 6 Another take on why the Pope resigned. - 7 Should victims of rape be immune from underage alcohol crimes if they report having been raped? - 8 Worst Jury Ever? - 9 RW Recommends - 10 Am I an old fart? - 11 Suggestion for the main page/mission statement. - 12 Random chemistry question - 13 How to sell horrible ideas - 14 Not a good sign... - 15 Copyright Alerts System - 16 The Seven Deadly Sins around the US - 17 Test blog for layout - 18 Glenn Beck derides wrestling - 19 Why I hate liquid threads - 20 Bush more regulatory than Nixon - 21 Slow as balls. - 22 This theologian is quite sophisticated. - 23 Cardinal Keith O'Brien &Sexual hypocrisy - 24 This should motivate you to encourage girls to be more masculine - 25 Traveling to New Haven this Saturday - 26 astorehouse of knowledge server error 500 ? - 27 A really weird question - 28 Berlusconi screws Europe over - 29 Win an Oscar, file for bankruptcy - 30 BOC's new Patriarch and other fun stuff at home - 31 Why I have lost almost all respect for The Onion - 32 Wiki Love - 33 All by my Goat - 34 New Conservapaedia, what's it doing? - 35 Soledad - 36 March Madness... Vatican Style! - 37 ??? - 38 Politically confused - 39 mw:Extension:Widgets - 40 The death knell of the F-35? - 41 So. - 42 Need Help Identifying Jacket Republicans and technology[edit] Not a WIGO, but this NY Times op-ed is worth a read: Can the Republicans Be Saved From Obsolescence? VOXHUMANA 02:44, 21 February 2013 (UTC) - I thought of putting that on blogs, but couldn't think of a good tag line - if anyone can, they should - David Gerard (talk) 09:56, 21 February 2013 (UTC) - "The NY Times nicely illustrates Betteridge's law of headlines." Too obscure? 
--Yukabacera (talk) 12:43, 21 February 2013 (UTC) User page spam[edit] What, if any, are the guidelines on such as this which is merely a link to a commercial psoriasis "treatment" site (in Polish)? My first instinct was to vape it, but it is a user page. Scream!! (talk) 12:47, 21 February 2013 (UTC) - User pages can be vaped if they are just spam, I think. It's been done many times. SophieWilder 12:52, 21 February 2013 (UTC) - Recent example. SophieWilder 12:55, 21 February 2013 (UTC) Fizxing keyboasrd?[edit] Ezxzzcuase me for as moment while I try to rectify this... ...and fuck me sideways with a rake, it's gone RIGHT as I came to the end of that line. For the past few months, perhaps stretching back as far as October or November, my notebook's keyboard has had this weird tick. Around once or twice a week, or once a fortnight if I'm lucky, the far left of my keyboard will...kind of stick together. The keys don't go down at the same time or anything, but if I press 1 or 2, 12 will appear, if I press q or w, then qw will appear, repeat for a, s, z and x, and sometimes it can come out worse (see 'Ezxzzcuase'), so I'll have to plan ahead, go back and delete letters, or in cases where I'm inputting a password, either be very attentive and lucky or resort to the onscreen keyboard. I planned to come here and explain this after whacking the affected letters after 'rectify this' until it came right, and then it happened. I'm not kidding when I say it has a habit of going away when I start to complain via typing to someone else. So, I'm wondering what I should do if I want to fix it for good, considering my spacebar and a couple other keys (such as b) are a bit rigid and I want to get those looked at, not to mention the dandruff that's built in there. Do most notebooks let you remove the entire keyboard at once to let you give it a brush or hoover? 
Polite Timesplitter come shout at me for being thick 14:10, 21 February 2013 (UTC) - There is an excellent chance that the symptoms you've described are electrical rather than mechanical and thus it's possible that you can't really fix them by cleaning it, but it's worth a shot. The short key travel normally used on portable hardware means the keyboard is far trickier to take to pieces than an old full-size PC keyboard, there is definitely a risk of snapping tiny yet vital pieces of plastic if force is used. So be very careful. - Still, it is very common for the keyboard itself to be a removeable and replaceable component of a laptop type device. Check the underside, if there are ordinary-ish screw heads visible which are labelled with a keyboard icon or similar that certainly means that, if you unscrew all such screws, you will be able to remove the keyboard making cleaning easier. Usual caveats about disassembling things apply (unplug from the wall, remove battery if possible, work on a clean, flat table or similar, do not apply excess force, etc.). For more common devices replacement keyboards are available from ebay or similar sources, particularly if your keyboard type is popular or you can live with the "wrong" keyboard, e.g. many Brits can put up with a US keyboard. So that's an option if you can't "fix" it. Tialaramex (talk) 15:01, 21 February 2013 (UTC) - Odds are you got something sticky under there or the thin film that gets pressed into contact when you depress your key is cheesed. Laptop keyboards aren't consumer serviceable. Some brands are notoriously shitty. Like he said, you can try popping the keys off, but it's an absolute guarantee you'll break something and realize you still can't get at the problem. Replacement keyboards are cheap and easy to install. Don't waste your time doing anything else. When people ask technical questions it helps to give specifics. 
Like, wouldn't it make you happy if someone could give you a link to the $20 replacement part that takes 5 minutes to install? 15:37, 21 February 2013 (UTC) - Keyboards are usually cheap ($15-$40) and easy (5-15 min) to replace. Occasionaluse (talk) 15:46, 21 February 2013 (UTC) - What they said. I was able to buy a new keyboard on Amazon, since mine was a very popular model. I'm also a total klutz yet was able to change it on my own, so no sweat. --TheLateGatsby (talk) 15:51, 21 February 2013 (UTC) - Spilling sticky drinks over your keyboard is known to produce this effect. While laptop keyboards are not really user serviceable it's usually not a big deal to replace them. Replacements for common models are easily found on Fleabay and a Goggle search will probably turn up a guide on how to open your laptop case, unless it's one of those impossible-to-get-into poseur models. Your only problem might be getting an exact replacement for the key layout. You can either remap your keyboard or pop off some of the old keys and put them on your new one. ГенгисIs the Pope a Catholic? 19:00, 21 February 2013 (UTC) - If it's just sticky stuff, you can also try removing the keyboard and all electronics and ribbon cables attached to it and running it through the dishwasher on a short cycle with no detergent. Seriously. It can't make things any worse for a useless keyboard and it sometimes works. Let it dry completely away from sunlight or extra heat before you reinstall it. Obviously don't try this on keyboards that light up; they have inaccessible circuits under each key for the LEDs. 19:09, 21 February 2013 (UTC) - If it's still under warranty, you can send it in and have the keyboard replaced. Nebuchadnezzar (talk) 19:55, 21 February 2013 (UTC) Is it just me?[edit] Am I the only one who thinks uttering the phrase "Obama Phone" immediately puts someone in the idiot column, indicating that no further conversation is necessary? 
SirChuckBA product of Affirmative Action 17:47, 21 February 2013 (UTC) - I wonder who owns this website. It doesn't have the racially charged tone of a conservative welfare parody, and it says it's independent and not affiliated with any government.--"Shut up, Brx." 17:59, 21 February 2013 (UTC) - Well, describing any government program as "Obama[whatever]" may not mean one is an idiot, but it does mean they have been pretty well rooked by the Ministry of Truth. Both "Obamacare" and "Obamaphones" are Republican ideas. --TheLateGatsby (talk) 18:03, 21 February 2013 (UTC) - There are a number of phrases that indicate that I have no real need to engage with the person who has uttered them: "Obama Phone," "Death Panel," "Birth Certificate," "FEMA Camp," "NAFTA Highway," etc, etc, etc. Nailed a retread to my feet and prayed for better weather. 18:05, 21 February 2013 (UTC) - Yes, but Obama was very clever to re-appropriate the term "Obamacare." Nailed a retread to my feet and prayed for better weather. 18:06, 21 February 2013 (UTC) - (Edit Conflict) Not sure who owns it Brx. It was registered via domains by proxy. and Gatsby, anyone using the term Obamaphone isn't just linking the program to the government, they're specifically impugning Obama for handing out free phones to poor people. SirChuckBBoom Goes the Dynamite 18:07, 21 February 2013 (UTC) - To clarify, when I say "Ministry of Truth", I mean Fox News. --TheLateGatsby (talk) 18:11, 21 February 2013 (UTC) - Their memo headers say "MiniTrue," actually. Radioactive afikomen Please ignore all my awful pre-2014 comments. 18:15, 21 February 2013 (UTC) - Anytime I hear someone use the term I want to assume a few things about them, but mostly that they're either stupid, sloppy, or liars. In reality these are RepublicanCongressPhones from a program started in the mid-90's under the first majority Republican Congress in 40 years. But that's irrelevant to people eager to smear the Coon in Chief. 
They're part of a program that gives cheap phones with limited functionality to people below the federal poverty guidelines who can't afford copper wire phones in their homes. It's not about giving cell phones to "welfare queens" or whatever other racist names bigots have for people they don't like. These phones are sometimes a lifeline for poor people, hence one of the programs being called "Lifeline." There's a bill presently pending in the House called the Stop Taxpayer Funded Cell Phones Act of 2011, sponsored by 20 representatives. Predictably, every single one of them is a Republican. Nice. 19:25, 21 February 2013 (UTC) - I would add that I find it difficult to understand why programs like these and broad consumer protection wouldn't be broadly supported on both sides of the aisle. Many who fall into the category of being considered "low income" and who also disproportionately need assistance with problems arising from unfair, one-sided consumer financial transactions are people on fixed incomes, often solely social security - e.g. these guy's parents... 19:31, 21 February 2013 (UTC) - To quote Michael Moore's Sicko, "They all loved their mothers. It's just that they didn't love our mothers as much." Radioactive afikomen Please ignore all my awful pre-2014 comments. 19:48, 21 February 2013 (UTC) - The "Obamaphone" page linked above has this, which seems pretty legit. It even mentions that the program was started by Reagan. But then I've never heard this phrase outside of this discussion.--Token Conservative (talk) 20:51, 21 February 2013 (UTC) - Lucky! --TheLateGatsby (talk) 21:53, 21 February 2013 (UTC) - Ring ring ring ring ring... Obama Phone? Nebuchadnezzar (talk) 04:27, 22 February 2013 (UTC) Is this worth an article/is this covered here and I just can't find it?[edit] "Free Distant Qigong Energy Healing." Nailed a retread to my feet and prayed for better weather. 
02:56, 22 February 2013 (UTC) - I'm sure there's some article here about general moronic ideas.--Token Conservative (talk) 03:03, 22 February 2013 (UTC) Another take on why the Pope resigned.[edit] So there was that dumb rumor hitting my Facebook last week about the (not-really-existent) "International Tribunal for Crimes Against Church and State" getting the Pope indicted; more than a few smart and educated people I know fell for that, mostly because decent humans want to see the Pope behind bars. But this looks at least moderately more interesting. Nailed a retread to my feet and prayed for better weather. 04:47, 22 February 2013 (UTC) - I still prefer my theory--Token Conservative (talk) 04:51, 22 February 2013 (UTC) Should victims of rape be immune from underage alcohol crimes if they report having been raped?[edit] I think that this would make rape much more likely to be reported, and would reduce victimization. I think it's absolutely ridiculous that girls who are raped end up getting charged with crimes. Of course, the rule of law fascists would go ape over this. The law is designed to protect people, and people are not being protected if they are not reporting rape (a severe crime) because they will be charged with a victimless crime if they do. Rational formation and irrational application. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 21:31, 19 February 2013 (UTC) - Perhaps you could not always talk about rape here?--MikallakiM 21:47, 19 February 2013 (UTC) - Do "girls who are raped end up getting charged with [underage alcohol] crimes"? Without citations you sound rather like you're concern trolling. Peter mqzp 21:51, 19 February 2013 (UTC) - What these guys said. Wėąṣėḷőįď Methinks it is a Weasel 21:53, 19 February 2013 (UTC) - How am I concern trolling? I am speaking out against an absolutely disgusting form of blaming and punishing the victim who has already suffered to an unimaginable extent.
ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 21:55, 19 February 2013 (UTC) - (ECx2) I doubt this is really a significant concern. Besides, can't we keep all this rape talk to where it started so the rest of us can ignore it? DamoHi 21:56, 19 February 2013 (UTC) - I don't know how often it's a problem, but it is appalling that it happens. I was hoping to propose something that people would agree with instead of arguing, but if you don't want to discuss it, that's fine. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 22:01, 19 February 2013 (UTC) Concern trolling. PDs look the other way at minor crimes by victims. Hipocrite (talk) 22:00, 19 February 2013 (UTC) - Maybe sometimes they do, but not always. Stop your bullying. It's disgusting that you would attack someone standing up for women for your own little thing. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 22:04, 19 February 2013 (UTC) - Edit Conflict, twice: What "underage alcohol crimes" are you talking about? Where I did my underage drinking, it was a misdemeanor for a minor to be in possession of alcohol, but it wasn't a "crime" to have alcohol inside of you. Unless you showed up at the cop shop to report a rape while taking slugs from a bottle of Wild Turkey, I'm not sure there would be any "crime" to waive in the first place. Nailed a retread to my feet and prayed for better weather. 22:02, 19 February 2013 (UTC) - Ok, an offense, whatever the technical word for it is. Sorry I don't know all the legal terms. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 22:05, 19 February 2013 (UTC) - So, does this happen often in your life? Women getting arrested for alcohol infractions when they report that they've been assaulted? Nailed a retread to my feet and prayed for better weather. 22:09, 19 February 2013 (UTC) - It's never been an issue for me personally, but I've read about it happening. I'm sure you would agree that it shouldn't happen at all.
ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 22:11, 19 February 2013 (UTC) - Okay, let's have links to those articles you read. rpeh •T•C•E• 22:13, 19 February 2013 (UTC) - This was a long time ago. I'm sorry, but most people don't bookmark every page they've ever read. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 22:25, 19 February 2013 (UTC) - Well Google a phrase that you remember, and in the future, bring things up closer to their incidence so that they're still relevant. Please. rpeh •T•C•E• 22:30, 19 February 2013 (UTC) - guess now we have even more reasons we lie about being raped, huh? Godot Chúc mừng năm mới 17:35, 20 February 2013 (UTC) - You think that will make people lie about being raped to get out of trouble? Even if it does, it's still an important policy because it's far better for victims to be able to report rape without worrying about getting in trouble as a result of it. Even if there isn't a conviction of the rapist, it's still important not to take action against the victim. While MRAs might cry about girls being able to get out of underage drinking penalties, it's still important. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 18:11, 20 February 2013 (UTC) - There have been significant studies, both of campus life and life in general, that show the number of reported rapes that are fake is almost exactly identical to wrongful claims in any other field: about 3%. 3% of reported murders were actually done by the person reporting the murder. 3% of break-ins were done by the homeowners for insurance or to cover other crime. 3% of kidnappings are done by the parents to cover some other crime or get money from insurance, etc. It happens; it doesn't happen often, even with the dangling of a "get out of jail free" card. Which do you think is a worse stigma: getting a ticket for underage drinking? Or telling the world you were raped?
Godot Chúc mừng năm mới 19:20, 20 February 2013 (UTC) - Ideally, there would be no stigma from rape. No one would look favorably on girls lying about rape to get out of an underage drinking offense. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 03:49, 21 February 2013 (UTC) - Thank you, Captain Obvious. Nailed a retread to my feet and prayed for better weather. 03:55, 21 February 2013 (UTC) I think the problem here would be for people under 18. Over 18, underage drinking is one of the few things that is punished less severely. We could have a requirement that the police would not be allowed to inform the victim's parents, although I'm sure the big government types would rip bookshelves off their walls and scream about how the parents have a "right" to know. If they actually cared about their children, they'd rather have them report having been raped than have to worry about whether or not they'll get into trouble. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 18:35, 22 February 2013 (UTC) Worst Jury Ever?[edit] Without commenting on the trial itself, since there will now be a retrial, I'd just like to say that the jury in the Chris Huhne / Vicky Pryce trial must be a candidate for Worst Ever. The list of ten questions the jury asked the judge is unbelievable: "Can you define what is reasonable doubt?" is pretty dumb (and you have to love the judge's reply!) but "Can a juror come to a verdict based on a reason that was not presented in court and has no facts or evidence to support it?" is downright idiotic. Number 10 is also a goodie. For non-Brits, there's a good background to the whole case here. rpeh •T•C•E• 18:05, 20 February 2013 (UTC) - There are some things you need to understand about juries and jury trials. "Jury instructions", a list of things the judges give to the juries to explain their role there, are often vague, intentionally leading, and open, and have tons of things that the average person says "huh" to.
They are agreed upon by both sides, with the judge looking them over. Then, the number of ways things like "reasonable doubt" is played with in court, by both sides, leaves you with no real idea what the word REASONABLE in reasonable doubt means. Few people explain it well, and when they do, the other side jumps in and confuses you again. "We have 10 pieces of evidence, but one piece of evidence relies on a technique that is now regarded as problematic by the FBI" - so is there "reasonable" doubt? That is the very point of the game-playing often done by attorneys. Godot Chúc mừng năm mới 18:15, 20 February 2013 (UTC) - I understand that well. It doesn't change the fact that this jury was made up of idiots. It's just been on the news here - the judge had a real go at them too. rpeh •T•C•E• 18:18, 20 February 2013 (UTC) - "A fundamental deficit in the understanding of their role" was one quote from the judge. rpeh •T•C•E• 18:21, 20 February 2013 (UTC) - There are some silly questions there, but I personally would prefer that they ask them rather than getting their job wrong. I have been involved in one jury trial (peripherally) where our client would have benefited from a jury asking questions as dumb as these. DamoHi 20:18, 20 February 2013 (UTC) - Reading through those questions again, particularly 5-10 and the wording of Q5, leads me to think that probably only 1 or 2 jurors were completely useless and the rest had had enough of banging their head against a wall trying to convince them to consider the case properly. I hear that the jury was hung in the end, so there might be a retrial [1]. Interesting case all in all. I'll bet she isn't the first one to take one for a friend or partner. (allegedly) DamoHi 03:46, 21 February 2013 (UTC) - All juries aren't qualified for what they're asked to do, and in many cases get fuck-all instruction in changing that. I imagine this is probably average.
d hominem 13:16, 21 February 2013 (UTC) - On the contrary, juries are perfectly qualified for their actual job, it's just that people are desperate to over-think it. All you have to do is sit there, pay attention, and decide whether you believe beyond a reasonable doubt that the defendant is guilty (of each individual charge). Asking how to define "reasonable doubt" is an example of over-thinking it. We specifically disqualify people "in the business" from jury duty because they will over-think everything. The main problem for juries is one that you can't solve by changing the criteria - the real world is a messy, complicated place, full of uncertainty, quite unlike episodes of Law and Order, let alone CSI or Miss Marple. So ultimately it's up to your peers to figure out whether you're guilty, because we have no reliable means and yet we must make a decision anyway. Justice is imperfect. - One thing that surprises outsiders is that despite "beyond a reasonable doubt" the majority of complete trials have guilty outcomes. Out of defendants accused (of things that would lead to crown court, i.e. jury trial) 5.4% would result in "not guilty", 7.9% "guilty" by a jury, all the remainder don't get as far as asking the jury to make a decision (most commonly because the defendant pleads guilty). The reason is that (in the UK) prosecutors are forbidden from proceeding if they don't think they can secure a guilty verdict, so a typical prosecutor isn't bringing many toss-up cases before the court in the first place. Of course telling juries that would screw things up even further, because they'd know that at least one person (the prosecutor) is probably certain the person in the dock is guilty before they even start, and that person has seen more of the evidence than they have and knows more about the law. Tialaramex (talk) 15:46, 21 February 2013 (UTC) - "All you have to do is sit there, pay attention..." - yeah, exactly.
bomination 00:22, 23 February 2013 (UTC) - "One thing that surprises outsiders is that despite 'beyond a reasonable doubt' the majority of complete trials have guilty outcomes" - Presumably, the defendant wasn't arrested on a whim, and the police have been investigating them and building a case. Presumably. I'm not a lawyer or police officer by any means, and you sometimes hear about people getting framed or shoddy policework--"Shut up, Brx." 01:37, 23 February 2013 (UTC) RW Recommends[edit] Anyone got any good games for Android? --TechCheeselament 22:29, 21 February 2013 (UTC) - World of Goo. (Worth getting on PC as well if you don't have an Android.) Nebuchadnezzar (talk) 22:32, 21 February 2013 (UTC) - If you like actual puzzles check out Trainyard. If you're not sure if you like puzzles, try Trainyard Express first. Don't worry if you find the early puzzles easy, the game holds back the really mind-bending stuff until you think you've beaten it, and then "bonus" puzzles open up. If you prefer one-button endless-running mindlessness, try Jetpack Joyride. All the usual bird flinging, pig rolling or fruit ninja-ing stuff is also available on Android if you want to do things you saw your grandmother doing. Tialaramex (talk) 23:56, 21 February 2013 (UTC) - It does seem like you should say what kind of games you enjoy/are looking for. If you're into tower defense games and haven't played it yet, I think there's an Android app for Kingdom Rush--Token Conservative (talk) 00:05, 22 February 2013 (UTC) - Very much depends what kind of games you like. I spent a lot of time on Flow Free and Flow Free Bridges Worm (talk) 10:07, 22 February 2013 (UTC) - Oh, I love World of Goo! What's the highest you've built your tower to? Immortality's fun, except when you become a two-headed monster Talk to me or view my art 01:47, 22 February 2013 (UTC) - I was quite impressed with the Inotia series of RPGs. If you like that sort of thing, at least.
They're a bit grindy and repetitive (like, every mission is "go collect 10 of [X]") and the story is simple hero saves the world stuff... but they're good fun to have a bash at and quite addictive. d hominem 00:02, 23 February 2013 (UTC) Am I an old fart?[edit] I only listened to Gangnam Style a few weeks ago, and my response was to wonder what the big deal is. I've not listened to the Harlem Shake yet. Should I just start yelling at kids to get off my lawn and be done with it? MDB (the MD is for Maryland, the B is for Bear) 12:04, 22 February 2013 (UTC) - In answer to your question, yes, you should just start yelling at kids to get off your lawn. It's great fun. Mcnamara12 (talk) 12:59, 22 February 2013 (UTC) - The appeal of Gangnam Style wasn't the song itself but the music video. Same with Harlem Shake. Osaka Sun (talk) 13:05, 22 February 2013 (UTC) - Things like Gangnam and Harlem embody everything that is good about YouTube and everything that is bad about popular culture. Sit back in your rockin' chair, sip your g&t, yell at the kids and listen to this to restore your faith in humanity. --PsyGremlinPrata! 13:13, 22 February 2013 (UTC) - Sorry, but that was utter crap. A mediocre violinist jumping around in snow while playing over a drum machine is not going to cheer anyone up. Here is some proper "make the old fart happy" music VOXHUMANA 14:27, 22 February 2013 (UTC) - Now that's music. Godot The ablity to breath is such an overrated ability 14:29, 22 February 2013 (UTC) - But just to prove old farts can listen to music less than 20 years old, here is an extraordinary song/clip by some band in Newfoundland. VOXHUMANA 14:32, 22 February 2013 (UTC) - Lindsey Stirling hops around and grimaces most fetchingly, and the video was slickly produced, but enh. Here is a trombone quartet's recent cover of Kansas' Wayward Son. Settle back in the barca-lounger, dim the lights, free your soul, and drift away... 
Sprocket J Cogswell (talk) 14:57, 22 February 2013 (UTC) - Harlem Shake and Gangnam Style have at least done one good thing: American men now dance instead of just sitting in a corner drinking.--"Shut up, Brx." 15:01, 22 February 2013 (UTC) - There has to be a better way! --TheLateGatsby (talk) 15:16, 22 February 2013 (UTC) For what it's worth, when I said I "listened to" Gangnam Style, I watched the video. I still didn't see the point. MDB (the MD is for Maryland, the B is for Bear) 16:06, 22 February 2013 (UTC) - Back in my day... Nebuchadnezzar (talk) 17:54, 22 February 2013 (UTC) - I still haven't seen it, so you're not alone there. I heard it come on the TV or radio or something very recently and for the first time I heard it played long enough to hear more than the "oppa oppa" bit. It was like "don't know what this bit is, haven't heard the full song before". Still don't know how it ends (then again, I can say that about Lord of the Rings). - Crystallise isn't my favourite Lindsey Stirling track by a long shot, so I raise you four Scandinavians and some cellos. theist 00:10, 23 February 2013 (UTC) - So Frodo decides it's time to leave, and ends up going to Hogwarts, where he meets this sparkly vampire and they all get married. Godot The ablity to breath is such an overrated ability 00:32, 23 February 2013 (UTC) - My brain has just melted processing that sentence. bomination 01:08, 23 February 2013 (UTC) - The story's kind of bland. It's about this guy named Dumbledore Calrissian... Nebuchadnezzar (talk) 02:50, 23 February 2013 (UTC) Suggestion for the main page/mission statement.[edit] So, from what I've seen here, I wonder if the "Our purpose here at RationalWiki includes" section might add a point number 4: "Educating people about the basic concept that fucking people without their consent is generally a bad idea." It seems to be a pretty popular topic. Nailed a retread to my feet and prayed for better weather. 22:07, 19 February 2013 (UTC) - Do you mean...
5? Uke Blue 22:12, 19 February 2013 (UTC) - Was it really necessary to have yet another section on rape? DamoHi 22:14, 19 February 2013 (UTC) - Sorry, yeah, 5. And poking through Recent Changes...maybe? I mean, there seem to be at least 3 active threads on the topic, so it seems pretty popular. Nailed a retread to my feet and prayed for better weather. 22:20, 19 February 2013 (UTC) - They're all the same thread, which has metastasised. Peter mqzp 22:20, 19 February 2013 (UTC) - I agree with Damo on this one. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 22:29, 19 February 2013 (UTC) Welcome to RW, PS&L. Unfortunately, it seems to be MRA season recently, so yes, there are a lot of mind-numbing disputes about feminism & rape around. But there is other non-rape-related stuff going on too; I hope so anyway. Wěǎšěǐǒǐď Methinks it is a Weasel 23:35, 19 February 2013 (UTC) - I'm going to be discussing other stuff, because this makes even me feel tedious. As for this MRA BS, I think it's about to make me identify as a Feminist, or at least be very openly supportive of Feminism. I agree with 90% of everything they say, probably more. These MRA people make me die inside. There was one idiot I saw on Reddit who was talking about how he joined the MRA movement. He mentioned that there was a guy he knew whose "life had been ruined" because a girl falsely accused him of rape. While people shouldn't mindlessly believe rape accusations (or any accusations for that matter), I'm sure that most of the people who say that rape accusations are false are just trying to justify or cover up what they did. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 02:15, 20 February 2013 (UTC) - Weren't you complaining about men being pansies or am I merging people in my mind?--MikallakiM 02:58, 20 February 2013 (UTC) - I don't think that MRAs would be agreeing with something that was considered sexist against men.
ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 03:55, 20 February 2013 (UTC) - Since you have suggested that feminists exaggerate rates of rape & inhibit progress towards women's rights, I'm taking this "very openly supportive of Feminism" declaration with a pinch of salt. Wèàšèìòìď Methinks it is a Weasel 13:25, 20 February 2013 (UTC) - That is the 1%... That I disagree with. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 16:26, 20 February 2013 (UTC) - 'Basic Feminism 101 For Dummies' should mention that, like "atheism," there's not a single school of thought that everyone consistently agrees with. You have second wave, third wave, sex positivity and radical feminism just to mention a few big ones off the top of my head. The feminism of Blue isn't quite the same as the feminism of Andrea Dworkin, the feminism of me (in so much that I don't really identify as a feminist for Reasons) isn't the same as the feminism of Germaine Greer. It might not be even "1% of feminists" that you disagree with, it might be a significantly higher number than that - but the important thing is to identify the specific beliefs and ideas you identify with, rather than a generalised ideology. narchist 00:18, 23 February 2013 (UTC) - Hah, the feminism of Germaine Greer now doesn't even agree with the feminism of Germaine Greer when she wrote The Female Eunuch. I saw her in Liverpool a couple of years ago and during the Q&A session some earnest feminists were banging on about her old writings. She told them that things have changed and that feminism now is different from the feminism of 40 years ago. Lily Inspirate me. 21:26, 23 February 2013 (UTC) Random chemistry question[edit] Does anybody here know what the largest known molecule containing no carbon is? By "large" I mean the molecule with the most atoms. --Tlaloc (talk) 21:27, 21 February 2013 (UTC) - You can make silicon dioxide lattices, which could theoretically go on forever I suppose.
Peter mqzp 21:29, 21 February 2013 (UTC) - Ask Sterile on his userpage. He doesn't appear to look at SB much. 21:35, 21 February 2013 (UTC) Aren't diamonds technically one huge molecule? Innocent Bystander (talk) 22:34, 21 February 2013 (UTC) - Duh! As soon as I pressed 'save page' I realised how dumb my answer was. Innocent Bystander (talk) 22:35, 21 February 2013 (UTC) - Look on Wikipedia for inorganic polymers. Polythiazyl has the structure (SN)x, for example. --Martin Arrowsmith (talk) 03:16, 22 February 2013 (UTC) - Peter's right about silicon dioxide (quartz) and there are other large silicates, too. Black phosphorus is a covalent network compound, and there might be some polyphosphates. I'm sure there are silicon-based polymers. sterilesporadic heavy hitter 00:32, 22 February 2013 (UTC) - Other fun ones: polyborazine, polysilazanes, and polythiazyl. sterilesporadic heavy hitter 00:37, 22 February 2013 (UTC) - I have no idea how you'd determine which is the largest. sterilesporadic heavy hitter 00:40, 22 February 2013 (UTC) - If you want a spinning molecule gif, let me know. I haven't made one in ages. sterilesporadic heavy hitter 00:51, 22 February 2013 (UTC) - Thanks! I told a friend that only carbon could make molecules big enough to be the basis of life, but then I thought I may be wrong. I'll let you know if I need a gif. Not likely, but thanks! --Tlaloc (talk) 02:13, 22 February 2013 (UTC) - What's the largest you could render, out of interest? Peter mqzp 03:51, 22 February 2013 (UTC) - You can do large molecules such as proteins. With Jmol you could use ribbons or atoms or whatever. With Blender you can use only atoms (or easily at least, although there's still software I haven't played with), but you can more readily control the movement through the molecule. There's a python-based program I haven't played with. Most of these are even free, although I'm not sure about the blender plugins. It's just a matter of getting the right input file. 
Proteins come as PDB files many times. I'll make you one when I get a chance. Look at the genetics page for a small DNA strand. The only limit on size is the time for the computer to render it. sterilesporadic heavy hitter 13:07, 22 February 2013 (UTC) - If you're going to play with big molecules, use Qutemol. It accepts structures in PDB form, which is fairly easy to export out of any of the main free/trial viewers like DS Visualiser or ChemCraft. d hominem 23:55, 22 February 2013 (UTC) - Depends what you want. I don't think Qutemol works on my laptop, but Jmol works anywhere, can be interactive including customizable buttons, checkboxes, etc., and works with every file format known to chemists. Actually, I recommend Avogadro, too. sterilesporadic heavy hitter 12:19, 23 February 2013 (UTC) Silicon-based life has been speculated but seems unlikely. Life requires the ability to make and break polymers, and carbon's tendency to form single and double bonds, as well as to make four of them, makes that likely. Si=O bonds tend to be too reactive. sterilesporadic heavy hitter 02:27, 22 February 2013 (UTC) - I've talked about this with... someone... chem prof maybe? Been awhile. Anyways, I think the basic argument he made is that to be the basis of life (like carbon) an atom would need to make bonds that are strong, not very reactive, but can be broken down, and would need a lot of valence electrons, so carbon is really your best bet, but silicon is probably the next most likely. --Token Conservative (talk) 03:02, 22 February 2013 (UTC) - That was my understanding too. Silicon is the "most plausible" due to having some of the same properties as carbon. But it definitely has a lot of drawbacks, like the fact that long-chain molecules tend to react with water, and silicon dioxide is a solid at temperatures where water is liquid. Plus silicon is roughly 1000 times more abundant than carbon on earth, yet carbon still formed the basis of life.
VOXHUMANA 04:14, 22 February 2013 (UTC) - I am reminded of the old Star Trek episode that featured a silicon-based lifeform which was apparently an intelligent rock. ГенгисRationalWiki GOLD member 07:33, 22 February 2013 (UTC) - It wasn't a rock, it was a blob... thing that could tunnel through rock as easily as we walk through air. --Revolverman (talk) 17:40, 22 February 2013 (UTC) - The simple fact is that when you actually take a look at biological molecules and total up the oxygen, nitrogen, hydrogen and phosphorous in them, calling them "carbon based" is almost hilariously wrong. sshole 23:58, 22 February 2013 (UTC) How to sell horrible ideas[edit] Easy, make an obnoxiously hip website targeted at college kids. I'm so awed by how over-designed it is that I can barely focus on its terrible content, which is entirely about the evils of contraception and the virtues of "fertility awareness." (And what is "fertility awareness," you may ask? Two words: psychic vaginas.) Radioactive afikomen Please ignore all my awful pre-2014 comments. 04:11, 22 February 2013 (UTC) - So, my idea to make a fad diet needs to be targeted at hipsters? Excellent!--Token Conservative (talk) 04:18, 22 February 2013 (UTC) - It's almost as subversive as a Che Guevara t-shirt! Nebuchadnezzar (talk) 04:26, 22 February 2013 (UTC) - What do you mean obnoxiously hip and over-designed? It's not hip if you're a fucking douchenozzle. That site doesn't even come close to passing for hip. It's like you showing up at CBGBs in 1980 in a Blondie hoodie. It's not over-designed. It's mostly pretty poorly designed. The typography is not good. Nice typeface but bad contrast and font weight. Bad color. They need to learn what a grid is. The jQuery carousel is amateur hour. This post was mostly unresponsive. I'm going to get back to learning Objective-C drunk. Lates. 05:06, 22 February 2013 (UTC) Not a good sign...[edit] Seems the unwashed masses aren't quite willing to put their money where their mouths are. 
--PsyGremlinParla! 08:11, 22 February 2013 (UTC) - Can they not survive on advertising? That seems a bit odd. --DamoHi 08:51, 22 February 2013 (UTC) - Site for a movement that is (well, at least claims to be) against taxes asking for its members to dig deep. Day = made. Polite Timesplitter come shout at me for being thick 08:56, 22 February 2013 (UTC) - Unless my history is completely wrong, the actual sovereign state that became today's United States of America already did this once. Patriots (read, rich people) wanted their new state to be funded voluntarily and not to borrow or have any tax-raising powers, but weirdly not enough money was contributed to keep it running and in the end you got your current USA complete with national debt and various taxes because that actually works. Tialaramex (talk) 09:18, 22 February 2013 (UTC) - It would be interesting to see statistics (if any exist) on charitable donations by people who identify as libertarian, anti-tax or anti-welfare (not to this website specifically, but donations in general). Their argument is usually that charities should replace the role of welfare (for things like lifelong disability, senior citizens), but I'm very sceptical about whether these people are really big givers. Wěǎšěǐǒǐď Methinks it is a Weasel 13:38, 22 February 2013 (UTC) - I'm sure they're all in favour of it... as long as it's other people who do the giving. We're talking about a mentality that decided that what post-earthquake Haiti needed was Bibles... --PsyGremlin講話 13:43, 22 February 2013 (UTC) - I imagine that the site is managed by profligates who spent all the money on hookers and blow. Seriously, their site is laggy as hell and it takes 30k to run? We should give them Trent.--"Shut up, Brx." 15:05, 22 February 2013 (UTC) - I hope Teabook keeps going, if not, either they'll submit to King Mark I, or they'll flood Google+ -and there are enough of them there as it is. 
--TheLateGatsby (talk) 15:14, 22 February 2013 (UTC) @ Tialaramex That was the US under the Articles of Confederation, and it was only the Federal level that had no taxing authority, the states did. The issue was that the US was indebted to (among others) France, and the income the Feds were getting from donations was less than the interest on the federal debt. The people in charge decided "fuck it, we need money" and started taxing whiskey (the first national tax was a vice tax, go figure), which led to the Whiskey Rebellion (of course the first national insurrection was a bunch of drunken morons), which led to the equivalent of Congress deciding to draft a whole bunch of reforms to the Articles of Confederation, which became the Constitutional Convention, and that led to the drafting of the Constitution, the writing of the Federalist Papers, the Anti-Federalist Papers, the Bill of Rights, and blah blah blah. --Token Conservative (talk) 16:27, 22 February 2013 (UTC) Alcohol Tobacco and Firearms[edit] - Although many of the men involved in the Whiskey Rebellion were surely drinkers, at the time whiskey was also a commodity used in bartering along the frontier. --TheLateGatsby (talk) 16:32, 22 February 2013 (UTC) - If the Tea Partiers wished to look "smart" they could read the Anti-Federalist papers and gain a bit more intellectual thrust to their argument. Instead, they compensate for their ignorance of civics with blind rage. --TheLateGatsby (talk) 16:35, 22 February 2013 (UTC) - As I recall, the tax was on the manufacture of whiskey, so it wouldn't have severely inhibited its use as a medium of exchange, especially since alternatives like tobacco were available. And if the Tea Party wanted to look smart about the constitutional matters, they would read the Federalist Papers, the context in which the US drafted the Constitution, and maybe read some Supreme Court decisions.
But that's like asking a blow fly not to be attracted to blood--Token Conservative (talk) 16:37, 22 February 2013 (UTC) - Also, the Whiskey Rebellion post-dated the Constitutional Convention. Hydrogen and Time (talk) 16:38, 22 February 2013 (UTC) - The Whiskey Rebellion occurred in Pennsylvania -I might be mistaken, but isn't that too far north to grow tobacco? And you're right that the Federalist papers would be more useful, but I used "smart" in scare-quotes because the Tea Partiers are just looking for Founding Fathers to fetishize (deify?), they don't want to understand. --TheLateGatsby (talk) 16:56, 22 February 2013 (UTC) - Tobacco is still a cash crop in Lancaster County, PA, or it was the last time I saw. Sprocket J Cogswell (talk) 17:15, 22 February 2013 (UTC) - What you would do to use tobacco as currency is to make ropes of it, like this, and use that. And I swear there was some rebellion in the lead up to the Constitutional Convention.--Token Conservative (talk) 17:25, 22 February 2013 (UTC) - Shays' Rebellion in Massachusetts. --TheLateGatsby (talk) 17:34, 22 February 2013 (UTC) - Goes to show what I know. Could farmers grow Tobacco in Western PA, where the Rebellion started? I'm not finding evidence of it. --TheLateGatsby (talk) 17:37, 22 February 2013 (UTC) - Shay's Rebellion, goddamn it, thank you Gatsby.--Token Conservative (talk) 17:46, 22 February 2013 (UTC) - You're welcome. All this talk about tobacco makes me want some Cavendish... --TheLateGatsby (talk) 18:21, 22 February 2013 (UTC) -. - Easy enough to find where that came from. Sprocket J Cogswell (talk) 01:27, 24 February 2013 (UTC) Set to go live very soon in the U.S. Do we panic now? (...Or is this all overblown?) Uke Blue 03:21, 23 February 2013 (UTC) - Torrent clients encrypt uploads and downloads, and I use Peerblock. Moreover, I mostly just download television programs as they come out. I don't think I'll be affected. 
If I am, I hope it'll just be a warning or a small fine instead of a huge lawsuit. Then I'll scale back my activities. And it's not like the P2P community won't figure something out--"Shut up, Brx." 04:02, 23 February 2013 (UTC) - Peerblock is the Homeopathy of file sharing. --2.39.39.47 (talk) 10:22, 23 February 2013 (UTC) - This is a good chance to start using I2P. --83.84.137.22 (talk) 20:22, 23 February 2013 (UTC) The Seven Deadly Sins around the US[edit] Maps 32-38 show how much the Seven Deadly Sins are committed around the US; the Bible Belt, of course, is guilty of them the most. (The other 31 maps are pretty cool too.) Sam Tally-ho! 01:56, 23 February 2013 (UTC) - Isn't the Bible Belt largely Protestant? Cardinal sins are a Catholic affair, and a less than canonical one at that. They were just meant as a guideline for believers, have changed over the years, and may just need to be updated in keeping with modern priorities.--"Shut up, Brx." 04:04, 23 February 2013 (UTC) - I've always been a sloth, lust, and gluttony kind of guy. Pride, wrath and envy don't do much for me, and in my situation avarice is pretty much a forlorn hope. Sprocket J Cogswell (talk) 04:22, 23 February 2013 (UTC) Test blog for layout[edit] is a test version of the RW blog, for me to mess with layouts. This is approximately what I meant by "magazine-style layout". This theme is EvoLve, which is almost there but not quite. The front page is nearly right, the single-post pages look crappy. Appearance critique welcomed, 'cos everyone thinks they're a design genius. First thing to do is make the titles show full length every time. Second is to make the sidebar a little thinner. Third is to make the menu less obnoxious (or just gone). Fourth is to find a font for the posts that doesn't suck. Suggestions for other themes that are magazine-ish are also welcomed. (Assume we have a budget of $0 for commercially supported themes.) 
- David Gerard (talk) 18:58, 23 February 2013 (UTC) - So far this theme appears to be a miserable failure on phones - full-width header, no sidebar, huge space between items (on Android, BlackBerry OS5 browser and Opera Mini). So much for "responsive" - David Gerard (talk) 20:05, 23 February 2013 (UTC) - Apparently. I still can't find the bit of PHP or CSS dictating that. (Though seeing how fucked the theme is on mobile, I'm less inclined to fiddle with it further.) Hence looking for other themes to give a front page like that, with three columns of one-paragraph article links - David Gerard (talk) 22:07, 23 February 2013 (UTC) - Found it! Looks much less shit now. Except on mobile, where it's still fucked. They expect people to pay for the full version of this shite. "Responsive" my arse - David Gerard (talk) 23:49, 23 February 2013 (UTC) - Much better, though by the looks it is utterly unusable on a mobile. Is it possible to have completely different themes for each type? Peter mqzp 00:03, 24 February 2013 (UTC) - Possibly - doing so is largely arse. I've got the front page about right, though fixing the frankly shoddy CSS on the single post pages will be laborious at best so I'm not entirely keen to bother. I probably will, though, because it's there(tm), and I'm learning more generically useful things about WordPress. Also, I left a really pissy review on wordpress.org. There must be other ways to get that sort of 3+1 column front page - David Gerard (talk) 00:52, 24 February 2013 (UTC) - Yeah, just booted it up on the Opera Mobile emulator... do you want me to take a look at some of the CSS too if you don't have a lot of time? I've got a lot more experience with responsive layouts now. - I'm just looking this up in Dragonfly and I can see what you mean by the mess - I'm sure there's a lot of completely redundant shit for a layout that simple. 
A quick look around shows that .container has "min-width" set to 960px which is at least what I think is borking the logo and pushing it out to the right and off screen on mobile resolution. That should be allowed to shrink much smaller, closer to 300px for a responsive layout. But overall I think you want to find something simpler, it's specifying widths in px far too much everywhere making a cascading soup of terrible. theist 14:38, 24 February 2013 (UTC) - Yeah, shit like that. That CSS is simply sloppy work. (I'm just picturing what the designers at work would say about it.) I can hardly believe this is the taster they think will sell the theme - David Gerard (talk) 14:39, 24 February 2013 (UTC) - Perhaps like the magazine project I was involved in, the CSS started really simple and functional, and then got ripped apart by the guys making the PHP back-end and left in a state. d hominem 14:47, 24 February 2013 (UTC) - I'm gonna hafta learn CSS and WordPress, aren't I. Oh well, there's lotsa spare cash in doing WordPress themes. Let's see if I can adapt Twenty Twelve into my desired 3+1 layout - David Gerard (talk) 19:07, 24 February 2013 (UTC) Glenn Beck derides wrestling[edit] In response to Glenn Beck claiming that a storyline is making a mockery of the Tea Party movement, WWE has posted a video featuring the two characters, Jack Swagger and Zeb Colter, who break character and invite Beck to a live show.--Cms13ca (talk) 03:59, 24 February 2013 (UTC) - Glenn Beck would be right at home in such a den of over-dramatic, scripted actors. ZING! Radioactive afikomen Please ignore all my awful pre-2014 comments. 04:38, 24 February 2013 (UTC) - I'd pay to see Glenn Beck eat a Swagger Bomb off a cage. --Revolverman (talk) 08:31, 24 February 2013 (UTC) - Considering the WWE and its fanbase are quite aligned with wingnuttery, this is quite funny. Osaka Sun (talk) 09:18, 24 February 2013 (UTC) - Actually... no, they aren't.
It turns out WWE's fanbase is mostly Democratic, not Republican. In the States anyway. --Revolverman (talk) 20:26, 24 February 2013 (UTC) - Glenn Beck is nothing but a fat bumbling fool. If Andy, Karajou and Ken had a love child Glenn would be it. GhostofTK (talk) 09:55, 24 February 2013 (UTC) - Weirdest three-way ever. But it's still kinda hot to think of. Nailed a retread to my feet and prayed for better weather. 18:30, 24 February 2013 (UTC) Why I hate liquid threads[edit] - 19 February 2013 20:56 User talk:Weaseloid (14 changes | hist) . . (+4,754) . . [WaitingforGodot; Weaseloid (3×); Hipocrite (4×); Inquisitor Ehrenstein (6×)] - Nm 20:55 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (32) (diff | hist) . . (+69) . . Inquisitor Ehrenstein (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:53 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (31) (diff | hist) . . (+247) . . PowderSmokeAndLeather (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:53 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (30) (diff | hist) . . (+72) . . Hipocrite (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:52 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (29) (diff | hist) . . (+286) . . Hipocrite (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:52 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (28) (diff | hist) . . (+88) . . Inquisitor Ehrenstein (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:50 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (27) (diff | hist) . . (+59) . . 
Inquisitor Ehrenstein (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:49 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (26) (diff | hist) . . (+326) . . Inquisitor Ehrenstein (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:49 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (25) (diff | hist) . . (+118) . . Monarchofascist (Talk | contribs | block) (Reply to Let's call this your only chance) - m 20:48 Mercury (diff | hist) . . (+8) . . Tracer (Talk | contribs | block) (→Real: Linked to the Dimethylmercury article on Wikipedia) [rollback] - Nm 20:48 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (24) (diff | hist) . . (+169) . . PowderSmokeAndLeather (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:47 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (23) (diff | hist) . . (+185) . . SharonW (Talk | contribs | block) (Reply to Let's call this your only chance) - 20:45 (Block log) . . [David Gerard; Damo (2×); Hipocrite (2×); Inquisitor Ehrenstein (4×)] - Nm 20:45 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (22) (diff | hist) . . (+720) . . Inquisitor Ehrenstein (Talk | contribs | block) (Reply to Let's call this your only chance) - 20:41 Conservapedia talk:What is going on at CP? (12 changes | hist) . . (+5,573) . . [Nutty Roux; Night Jaguar; JeevesMkII; 171.33.222.26; Ochotonaprinceps; Oldusgitus; Spud; Vulpius; Scream!!; Polite Timesplitter; SirChuckB (2×)] - Nm 20:41 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (21) (diff | hist) . . (+156) . . Hipocrite (Talk | contribs | block) (Reply to Let's call this your only chance) - N 20:40 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (18) (2 changes | hist) . . (+227) . . 
[Inquisitor Ehrenstein (2×)] - Nm 20:39 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (20) (diff | hist) . . (+563) . . PowderSmokeAndLeather (Talk | contribs | block) (Reply to Let's call this your only chance) - ! 20:39 Talk:Globalresearch.ca (diff | hist) . . (+434) . . Ahuman (Talk | contribs | block) (→Agregator?: ) [rollback] - N 20:37 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (19) (2 changes | hist) . . (+412) . . [Inquisitor Ehrenstein (2×)] - N 20:37 User talk:JohnnyH (diff | hist) . . (+195) . . Stabby the Misanthrope (Talk | contribs | block) (Created page with "{{Welcome}} Greetings, JohnnyH, and thank you for joining the most rational of {{noun}} wikis! I hope you enjoy your time here. ~~~~) - 20:34 Sheik Feiz Muhammad (diff | hist) . . (+57) . . Random bloke (Talk | contribs | block) (added snark) [rollback] - ! 20:33 Globalresearch.ca (2 changes | hist) . . (0) . . [Ahuman; PowderSmokeAndLeather] - Nm 20:32 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (17) (diff | hist) . . (+210) . . Damo (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:31 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (16) (diff | hist) . . (+118) . . Hipocrite (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:29 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (15) (diff | hist) . . (+142) . . Inquisitor Ehrenstein (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:27 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (14) (diff | hist) . . (+335) . . Hipocrite (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:27 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (13) (diff | hist) . . (+12) . . 
Monarchofascist (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:26 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (12) (diff | hist) . . (+96) . . Inquisitor Ehrenstein (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:25 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (11) (diff | hist) . . (+11) . . Hipocrite (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:24 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (10) (diff | hist) . . (+57) . . Inquisitor Ehrenstein (Talk | contribs | block) (Reply to Let's call this your only chance) - N 20:24 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (6) (2 changes | hist) . . (+163) . . [Inquisitor Ehrenstein (2×)] - Nm 20:24 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (9) (diff | hist) . . (+243) . . Hipocrite (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:23 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (8) (diff | hist) . . (+28) . . Hipocrite (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:22 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (7) (diff | hist) . . (+143) . . Inquisitor Ehrenstein (Talk | contribs | block) (Reply to Let's call this your only chance) - N ! 20:22 Forum:Hostility? (7 changes | hist) . . (+1,465) . . [Hipocrite (3×); JohnnyH (4×)] - 20:19 User:Inquisitor Ehrenstein/sandbox (3 changes | hist) . . (+875) . . [Inquisitor Ehrenstein (3×)] - Nm 20:18 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (5) (diff | hist) . . (+160) . . 
Hipocrite (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:18 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (4) (diff | hist) . . (+233) . . Weaseloid (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:17 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (3) (diff | hist) . . (+96) . . Inquisitor Ehrenstein (Talk | contribs | block) (Reply to Let's call this your only chance) - N ! 20:16 User:JohnnyH (3 changes | hist) . . (+416) . . [JohnnyH (3×)] - 20:16 User talk:Nutty Roux (2 changes | hist) . . (+617) . . [Nutty Roux (2×)] - Nm 20:15 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (2) (diff | hist) . . (+59) . . Hipocrite (Talk | contribs | block) (Reply to Let's call this your only chance) - Nm 20:14 Thread:User talk:Inquisitor Ehrenstein/Let's call this your only chance/reply (diff | hist) . . (+104) . . Inquisitor Ehrenstein (Talk | contribs | block) (Reply to Let's call this your only chance) SophieWilder 21:03, 19 February 2013 (UTC) - I feel your pain. Do what I do. Peter mqzp 21:07, 19 February 2013 (UTC) - EXCELLENT! Thank you Peter!!!! Scream!! (talk) 21:10, 19 February 2013 (UTC) - It's only showing up like that if you use enhanced recent changes, for what it's worth. Uke Blue 21:12, 19 February 2013 (UTC) - No it's not. It's as bad without enhanced RC. Scream!! (talk) 21:18, 19 February 2013 (UTC) - I do use enhanced recent changes, I just want it to do its thing on liquid threads like it does every other page with multiple edits. And Peter, I do want to actually read what people are saying, I just don't want every single edit to have its own RC line. SophieWilder 21:20, 19 February 2013 (UTC) - You would think enhanced RC would let you do that, wouldn't you? It must be possible somehow, but google isn't helping.
Peter mqzp 21:23, 19 February 2013 (UTC) - (Edit conflict) This is almost the sole weakness of LiquidThreads (that and your browser doesn't remember text meant for a thread post. It's terrible during a block war). If LQT and recent changes could reach a state of harmony (such as having each post in a thread listed under a single tab, like with Enhanced Recent Changes), world hunger could be abolished.--"Shut up, Brx." 21:14, 19 February 2013 (UTC) I've found it to be very useful. You don't get edit conflicts at all. It also makes it less likely that people will remove content from their talk pages or remove content without archiving. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 21:34, 19 February 2013 (UTC) - Rather irrelevant when you're forcing people to go out of their way to use RC when you decide to get into a large argument. --MikallakiM 21:35, 19 February 2013 (UTC) - Get rid of the damn ugly annoying thing. DamoHi 21:38, 19 February 2013 (UTC) - LiquidThreads is a damned good first step toward creating a better talk page system than the piece of shit default MediaWiki parser. It is used on the talk pages of at least 14 users (I counted) plus a few who've switched back and on a few forums; if we were to uninstall it we'd wipe out years of talk page history instantly. Do you want that? If you'd like to move to mandatory phaseouts of LQT, that's a discussion like any other and should probably happen on a forum or project page. But deinstalling it is a no go. Uke Blue 22:11, 19 February 2013 (UTC) - LiquidThreads is unloved and largely unmaintained. It may or may not work at all next time we upgrade the wiki. (We can stay on 1.19 for a few years, it's a Long-Term Support version, but that's not forever.) It's regarded by the WMF as a miserable failure and is still the butt of jokes amongst WMF staff. It has no future and it is going to die.
We were foolish to adopt it, thinking it would have a future - David Gerard (talk) 23:20, 19 February 2013 (UTC) - @Blue. I was really just having a grump. Although I find LQT annoying as hell and ugly, I wasn't seriously proposing its removal. DamoHi 23:35, 19 February 2013 (UTC) - What are the WMF staff saying about it? It seems good to me, but it works better on MediaWiki.org where it's used most. I use it at sturmkrieg.us and plan to implement it on other projects (provided I don't find out anything really bad about it) since it's a good talk page system, or at least superior to the standard talk page system. I think its use makes it less likely that other users will be tempted to edit the posts of other users, and its threaded nature would be good at preventing users from deleting unfavorable posts, such as warnings and block notices, from their talk pages. Its dynamic nature also is good because it archives talk pages automatically, which prevents users from clearing their talk pages instead of archiving them properly. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 01:58, 20 February 2013 (UTC) - Listen to David, I'd say. A longer list of one user's complaints is here. sterilesporadic heavy hitter 02:43, 20 February 2013 (UTC) - If it may break anytime in the near future, would that mean currently existing threads would become unviewable? If so, like I said - that's thousands of threads over several years down the drain. Yes, it was foolish for Trent and Nx to let us run wild with it if it's so unstable. Seme Blue 03:38, 20 February 2013 (UTC) - I don't think it's possible to get anything that's easy to manage out of the API, so yeah. I'm dumping a sample of what the output looks like in JSON. LQT doesn't render using the wiki's templates so all the HTML and CSS is in each thread, rather than simply the raw text, revid, user, etc for normal revisions. That stuff's not a big deal to strip but reconstituting it looks like a drag.
04:53, 20 February 2013 (UTC) - I see that. I might get rid of it, and possibly replace it with DPL Forums for user talk pages. I'll need an extension though that gives the new messages notice for subpage edits. I'll probably leave regular talk pages as normal. I do agree with Siebrand though, I do hate the regular talk pages more. For one thing, they're far less intuitive and prone to noobs not signing their posts. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 05:04, 20 February 2013 (UTC) - At the time, WMF was saying "this is the future, trust us!" even though it wasn't finished and a lot of people had said "wow, this thing sucks". It then became clear that its problems were intractable and WMF pretty much abandoned it. The "WMF staffers" bit came from talking to some of them, drunk, last week. That it's still used on mediawiki.org is good; this suggests there will be some sort of exit path - dumping to a wiki page, for example. But LQT is a zombie - David Gerard (talk) 08:40, 20 February 2013 (UTC) - Stuff like the above happens whenever you look at Recent Changes. LQT is not the offender there, and you can always hide the thread space from RC and your watchlist. It's a shame people hate LQT for whatever reason, as seriously using this POS system of hacking an editable page with indents and code and signatures is objectively terrible. bomination 13:36, 21 February 2013 (UTC) --"Shut up, Brx." 02:29, 22 February 2013 (UTC) - We have really been damned since before we started. Wikis were designed so that multiple people can develop a document together. I don't know what the original developers had in mind but I suspect they expected people working on the document to communicate via email. So instead of people shuffling an attachment back and forth and editing old versions and other such things they could look at its current state any time they wanted. Wikis were never designed for a project like Wikipedia, never mind being used as a forum.
That is why Citizendium has a forum; that is why we used to have a forum. It is only because this started as basically a hangout for people that met on a wiki that it is even a wiki. But yeah, trying to force something into an architecture (or other buzzword) that it was never made to do was a noble but fruitless goal. - π 05:18, 24 February 2013 (UTC) Assuming that it is continued, LQT is good in that it prevents noobs from clearing their talk pages instead of archiving, or problematic users from removing discussions about unfavorable activities. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 05:50, 26 February 2013 (UTC) Bush more regulatory than Nixon[edit] That's what Reason says 99.235.129.26 (talk) 13:09, 22 February 2013 (UTC) - Not at all surprising. Look up Nixon and the guaranteed annual income and be prepared to be shocked at what the GOP used to stand for. Also, the EPA. EDIT: Wait, sorry, I'm arguing for why it SHOULD be surprising. Sorry early here/no coffee yet. Nailed a retread to my feet and prayed for better weather. 13:47, 22 February 2013 (UTC) - Nixon and other Pre-Reagan Republicans are why I'm a Republican. If anyone cares. --Token Conservative (talk) 16:30, 22 February 2013 (UTC) - My experience in Native American studies taught me that Nixon was the best president ever and Jackson is Satan incarnate. --MikallakiM 17:34, 22 February 2013 (UTC) - Nixon? Really, Hamilton? The guy who used his executive powers to run cloak and dagger campaigns against his political enemies?--"Shut up, Brx." 17:35, 22 February 2013 (UTC) - They all do it, Nixon just got caught. Read about his domestic policies. Man did all right.--Token Conservative (talk) 17:47, 22 February 2013 (UTC) - How about foreign policy?
Nebuchadnezzar (talk) 22:27, 22 February 2013 (UTC) - You think Nixon was the first to extend a war beyond its legal boundaries?--Token Conservative (talk) 23:10, 22 February 2013 (UTC) - Certainly you'd be hard pressed to find a president who wasn't an evil fucking megalomaniac, and granted Nixon was really only defrocked in such a manner because he went too far (by directing the power of the state against the Democratic party, rather than just the usual union members, anti-war protesters, and radical leftist groups), but even so, I'm sure you could find a better role model to emulate if you tried. Q0 (talk) 10:41, 24 February 2013 (UTC) - Nixon is not my major political role model, but he's a solid third string man and I feel he gets more flak than he deserves. My real name does not include "Hamilton" in it at all, and my username is actually a reference to Alexander Hamilton, who along with the Federalist Party, both of the Roosevelts, Eisenhower, Edmund Burke, and Benjamin Disraeli are my major influences.--Token Conservative (talk) 21:42, 25 February 2013 (UTC) - If the only source Wikipedia has for this is the Fonzi of Freedom I'd take it with a grain of salt. Reason has been on this "Bush was a socialist" tangent ever since Obama was elected. Osaka Sun (talk) 21:12, 22 February 2013 (UTC) - Yeah, I've seen this attitude a lot. Apparently authoritarianism is a left-wing trait. Any Republican trying to mind your business isn't really a Republican at all. --TheLateGatsby (talk) 14:33, 25 February 2013 (UTC) Slow as balls.[edit] It's taking pages an average of 10-15 seconds to load for me -- the rest of the internet is moving at a much faster clip. What gives? Nailed a retread to my feet and prayed for better weather. 17:58, 23 February 2013 (UTC) - This site? Should be fast-ish with Squid doing the work. (I can say that our vandal didn't quite go away and we continued to get DOS-level hit rates from them, which does slow stuff down a bit.)
I just centre-clicked Special:Random five times quickly; first three loaded in < 3 sec, last two in under 10 - David Gerard (talk) 19:02, 23 February 2013 (UTC) - Yah, it's better now -- maybe I was editing during a backup or sump'n. Nailed a retread to my feet and prayed for better weather. 20:03, 23 February 2013 (UTC) - Sometimes it can be caused by the internet on your end. Though if other websites work fine at the same time, that probably isn't it. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 01:00, 26 February 2013 (UTC) This theologian is quite sophisticated.[edit] Just plugging a religious guy behaving like a rational human being. Move on. Just like New York City; Just like Jerico. 02:09, 25 February 2013 (UTC) - Even the guy who believed in an invisible man in the sky thought it was insane. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 00:55, 26 February 2013 (UTC) Cardinal Keith O'Brien & Sexual hypocrisy[edit] - ↑ UK's top cardinal accused of 'inappropriate acts' by priests - ↑ Top British Cardinal Faces Accusations of Committing 'Inappropriate Acts' - ↑ Britain's Cardinal Keith O'Brien to resign - ↑ Cardinal Keith O'Brien resigns as Archbishop Proxima Centauri (talk) 15:23, 25 February 2013 (UTC) This should motivate you to encourage girls to be more masculine[edit] Traveling to New Haven this Saturday[edit] ...It will be my first time in the city. Does anyone who's been/lived there have recommendations as to where I should go? Good historical sites or bars would be most pertinent to my interests, but I'm grateful for any advice. --TheLateGatsby (talk) 20:12, 25 February 2013 (UTC) A Storehouse of Knowledge server error 500?[edit] Has Philip finally pulled the plug? Hamster (talk) 06:16, 25 February 2013 (UTC) - Entirely offline from where I'm sitting. Wouldn't be terribly surprising, nobody was editing the thing. Poor old PJR, he had such high hopes for his choo-choopedia.
--JeevesMkII The gentleman's gentleman at the other site 19:58, 25 February 2013 (UTC) - Same here. A whois check shows PJR's domain registration is still valid. Does anyone know if PJR has any other blogs or whatnot online? --Llegar a las estrellas¿Dígame? 20:18, 25 February 2013 (UTC) - Wurks. sterilesporadic heavy hitter 00:52, 26 February 2013 (UTC) - God damn it, Proxima is editing it. Why can't people just let the thing die? Is editing your 17th wiki really that fucking important, Proxima? 01:04, 26 February 2013 (UTC) - It may yet become the unstoppable creationist freight train that obliterates the decrepit Toyota of atheism stuck on the level crossing! --Llegar a las estrellas¿Dígame? 22:41, 26 February 2013 (UTC) A really weird question[edit] Okay - I have really bad eyesight, and always have. Without my specs I can't see the top letter on the old-fashioned optician boards, but unless somebody else also has eyesight that bad they can't really grasp what my vision is like when I don't have my specs on. So I was wondering, is there a website or a programme where I could type my prescription for instance, or could adjust the vision on the screen, so that the end result is a picture or video on the screen that is similar to what I see when I'm not wearing my specs? Just curious, but I think it would help other people grasp just how friggin' difficult my life can get when my specs get steamed up, or rained on, or broken by some twat who isn't looking where they're going.-- Jabba de Chops 00:11, 26 February 2013 (UTC) - I used to have about a minus seven spherical correction. Now there is some astigmatism in the mix as well. Want to go sightseeing with me in a hired Cessna? It could happen, you know, and I get to sit in the left seat. This is about how I see your comment without me specs: - It's a 12x12 pixel Gaussian blur applied in one go, by the GIMP. Sprocket J Cogswell (talk) 00:26, 26 February 2013 (UTC) - Ah, Gaussian blur. Didn't think of that.
Hang on, I'll see what I can come up with.-- Jabba de Chops 01:38, 26 February 2013 (UTC) - That's about what the monitor looks like without specs. This could explain why I didn't learn to read until I got my eyes tested properly.-- Jabba de Chops 01:52, 26 February 2013 (UTC) - A tool like that would be really useful, but it would need to take into account distance as well. My vision at normal screen-reading distance looks like Cogswell's image, but I don't have to move back too far to make it look like yours. - My usual strategy for convincing people that I have serious trouble without glasses is to let them try mine on for themselves—not a particularly accurate method, but the pain tends to distract them from thinking about that. Peter mqzp 02:01, 26 February 2013 (UTC) - The problem with having people try my specs on, apart from greasy fingerprints all over the lenses, is that people think what they see with your specs on is what you see when you aren't wearing them, and then you find yourself explaining that's not how lenses work.-- Jabba de Chops 13:13, 26 February 2013 (UTC) - I noticed the other day that my car window with about 2-3 mm of frost on it was a good approximation for how well I see without my glasses. I could see light and colors, and maybe the outline of big things, sort of. That requires the right conditions, though, so not as easy as the Gaussian blur above. Mcnamara12 (talk) 03:46, 26 February 2013 (UTC) Berlusconi screws Europe over[edit] Crazification factor achieved! Osaka Sun (talk) 04:03, 26 February 2013 (UTC) - You know it's bad when a wannabe Caesar could walk in and likely be BETTER than anyone else. --Revolverman (talk) 04:13, 26 February 2013 (UTC) The clowns are taking over... where is Stephen King when you need him? 171.33.222.26 (talk) 18:12, 26 February 2013 (UTC) Win an Oscar, file for bankruptcy[edit] Not really appropriate for WIGO, but a few folks around here are also in the film/TV biz, so I thought I'd post this.
The company that won the VFX Oscar for Life of Pi has had to file for bankruptcy. Meanwhile there was a protest by VFX designers at the Oscars which got no media coverage at all. Life of Pi VFX Controversy. VOXHUMANA 22:41, 26 February 2013 (UTC) - Ah yes, more evidence that the free market always knows how best to manage society's resources. Uke Blue 00:09, 27 February 2013 (UTC) BOC's new Patriarch and other fun stuff at home[edit] The commie-era patriarch of the Bulgarian Orthodox Church died last year, and they finally elected a new one today. It was on TV all day, and a lot of people, politicians included, took turns to (metaphorically) fellate the BOC, the process, and the new Patriarch. It's puke-inducing. :( I need to look up the new guy and see if he had said anything stupid or at least RW-relevant. The BOC seems to have sorted out its internal affairs, and for the last few years it has been trying to regain ground lost during Communism. It's not a happy prospect - they've been pushing for religious classes at schools (together with the Muslim clerics - yeah, unlikely bedfellows) and some priests protested the last Pride shoulder to shoulder with the skinheads. :( On a related note, a few days ago the government resigned, which means that the elections this year will be sooner, and there have been somewhat large protests in all major cities, originally triggered by... the high electricity bills for January. They are Occupy/Tea Party-style protests - a heterogeneous mixture of people vaguely dissatisfied with the status quo for various reasons. A few nutjobs are pushing for changes in the constitution. If it happens, I'm buying a gun. :( --ZooGuard (talk) 18:25, 24 February 2013 (UTC) - Yes, yes, all that's very well, but we need to know more about what's happening in the chalga scene.
The worst pop music genre ever is not an achievement to toss aside lightly - David Gerard (talk) 19:06, 24 February 2013 (UTC)
- Did I mention that some of the people pushing for changes in the constitution are Members of Parliament, not random people on the street? No?
- Anyway, I don't know if it counts as "what's happening", but here's a 4-year-old rap/chalga/manele crossover. I think it may have been intended as a parody. Not that its usual audience cares...--ZooGuard (talk) 19:50, 24 February 2013 (UTC)
- With outfits from the Tijuana Brass! Truly, your national art form is a thing of beauty and wonder that would survive nuking from orbit - David Gerard (talk) 20:14, 24 February 2013 (UTC)
- <DumbAmerican>Isn't Bulgaria a part of the EU? Doesn't that mean you can move to other countries relatively pain-free?</DumbAmerican>--Token Conservative (talk) 21:34, 25 February 2013 (UTC)
- See wp:Schengen Area, especially the "prospective members" section. See also the hissy fit thrown by UK politicians when faced with the prospect of immigration from Bulgaria and Romania.
- And I'm not terribly interested in "moving". In addition to other reasons, my country has suffered a massive brain drain post-1989, and I don't see how I can help by contributing to it.--ZooGuard (talk) 10:10, 26 February 2013 (UTC)
- You seem pretty upset about the BOC and the actions of the government, to the point where it seems like you're getting ready for a violent revolution/breakdown of central authority. Leaving won't help your country, and I'll applaud you for your desire to help it, but unless you are in a position to do something, it might be a good idea to leave, if only until the transition is done.--Token Conservative (talk) 01:03, 27 February 2013 (UTC)
- Nope, you are reading too much into my tone, which was a bit hyperbolic (though Bulgarians are known to be a bit pessimistic).
And in case there is a language issue: "the government" = "the cabinet of ministers" that holds the executive power. (The ex-Prime Minister is currently in hospital due to very high blood pressure. :D)
- My concern about the BOC is that it may recover from "mostly harmless" to US-social-conservative levels of influence in politics and society. :) Or worse, Russian-OC levels. :(
- As for the "violent revolution" and "breakdown of central authority", we are far away from that. And unlike the wankers in the US, I have some idea of what it's like to live in a failing state. (The 90s economic crisis - it's not "marauding gangs of cannibals", but "thicknecks collecting 'protection' money", "the police being even more useless than now" and "politicians/crime bosses getting assassinated in broad daylight".) How serious the current kerfuffle is remains unclear, and it may amount to just "another amusing footnote in our crazy history". In the short term, the possible negative impacts are prolonging the current economic troubles and the negotiations with the EU (worst case - dropping out of the EU, which would be really, really stupid - not that some people aren't advocating for it...). In the long term, something to fear is a slow devolution towards something like Putin's Russia or Belarus, but that's stoppable with the usual mechanisms of a civil society. And by not making dumb changes to the constitution...--ZooGuard (talk) 10:20, 27 February 2013 (UTC)

Why I have lost almost all respect for The Onion

Edgy humor is one thing, but you don't call a nine-year-old girl a cunt. Ever. MDB (the MD is for Maryland, the B is for Bear) 12:59, 25 February 2013 (UTC)
- Baffles me. The thing is, this isn't even satire; it's not a joke. Onion humour would be perfectly along the lines of "She's a brat", but this gets rid of the joke - now just the word "cunt" is the joke. It's like PewDiePie screaming "rape" for every word it can be substituted for, or the ten-year-old yelling FART on the playground.
I expected better from the Onion. If we're supposed to accept that just using the word "cunt" is a 'joke', then "A Jew walks into a bar" would have to be one too. Polite Timesplitter come shout at me for being thick 14:25, 25 February 2013 (UTC)
- Yeah, sometimes the Onion posts without thinking. I see they took the tweet down, so hopefully some douche had his knuckles rapped. --PsyGremlinSnakk! 14:27, 25 February 2013 (UTC)
- You'll occasionally hear about radio shock jocks being fired for going too far. For instance, DC's "Greaseman" losing his job for an incredibly tasteless racial joke.
- Any idiot could get on the radio (or Twitter) and be offensive. True entertainment and art come when you know just how far you can push things, and never ever go any further. The Onion crossed the line this time. (Flew over it with a jetpack, actually.) MDB (the MD is for Maryland, the B is for Bear) 14:59, 25 February 2013 (UTC)
They've apologized. Though mostly to the young actress, not really for the word used. MDB (the MD is for Maryland, the B is for Bear) 16:58, 25 February 2013 (UTC)
- Apokalyps2547 (talk) 17:48, 25 February 2013 (UTC) Clearly it was supposed to be "can't."
@Timesplitter - Because if men hear words like "cunt," they'll suddenly go out and rape women. Seriously, enough is enough with the subconscious-psychology woo. "It creates underlying messages that..." "The reason you can't see any evidence for this is because it's all "internalized" and so you do it without thinking." FUUUUUUUCK! ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 18:24, 25 February 2013 (UTC)
- What the fuck are you going on about? TyJFBAA 18:44, 25 February 2013 (UTC)
- I don't think that's what Timesplitter was saying at all, actually. In fact, I don't see "underlying messages" anywhere in his post. Apokalyps2547 (talk) 19:28, 25 February 2013 (UTC)
- Inquisitor, maybe you shouldn't mention rape so often, especially when nobody else is talking about it. Seriously, it's weird.
Wėąṣėḷőįď Methinks it is a Weasel 19:33, 25 February 2013 (UTC)
- Seems we're all wrong and it is funny: BOLLOCKS Mr Kirshen Scream!! (talk) 23:52, 25 February 2013 (UTC)
- I get sick of repeating myself too. At least I don't have weird views on rape and watch Law and Order SVU all the time. I lived with someone who did that; he was... interesting. And yes, I realize I totally derailed that. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 00:53, 26 February 2013 (UTC)
- I keep dreaming that some day we'll move on from our perpetual rage society and stop getting horribly offended at every misfired joke, but then I wake up. SirChuckBHITWIN FOR PRESIDENT! 00:52, 26 February 2013 (UTC)
- Chuck, I am very rarely enraged, but calling a 9-year-old girl (or boy, for that matter) a cunt on Twitter, where the world is reading, cannot surely be amusing, however ironically meant. Scream!! (talk) 00:59, 26 February 2013 (UTC)
- I find the entire thought of anger over a word choice hilarious, because The Onion as a newspaper has had far more "offensive" material in print for years. Off the top of my head, there was one headline that said something along the lines of "experts admit donating money to Cystic Fibrosis stupid, wasteful." I guess laughing at dead kids is funny, just don't call them dirty names. SirChuckBPlease Excuse me, I have to go out and hunt giraffes 01:19, 26 February 2013 (UTC)
- Quite honestly, I don't think that I have ever even smiled at anything in the Onion. I think it's 3rd form humour (14-year-olds, for the Americans). Scream!! (talk) 01:26, 26 February 2013 (UTC)
- You are more than welcome to believe that. But let's not pretend like this was some outrageous event that forces our hand into action. The Onion is known for this type of humor. Some people enjoy it, some don't, just like every other form of humor on the planet. How is this any worse than most stand-up comics?
SirChuckBGentoo Penguins is the best kind of Penguin 01:30, 26 February 2013 (UTC)
- I think calling a nine-year-old child a "cunt" would be beyond the pale for most comics--but I don't frequent the comedy clubs, so YMMV. Just like New York City; Just like Jericho. 01:39, 26 February 2013 (UTC)
- Yup! Scream!! (talk) 02:14, 26 February 2013 (UTC)
- You should check out your local comedy clubs; since the rise of the "edgy comics" (Silverman, Griffin, Cook), this type of comedy is not even remotely out of place.... I'd add that it's just an extension of the Carlin/Murphy/Pryor line of edgy comics.... This is not new, nor is it newsworthy, and more people have heard about/learned of the quote due to the insane overreaction than ever would have known about it without the clutching of pearls. SirChuckBGentoo Penguins is the best kind of Penguin 03:15, 26 February 2013 (UTC)
- "You should check out your local comedy club..." If what I'll see there is people hurling obscenities in the direction of children, then no, no I shouldn't. It sounds like a horrible place. Just like New York City; Just like Jericho. 03:19, 26 February 2013 (UTC)
So you admit you don't know the culture, you refuse to investigate the culture because you don't think you'll like what you hear, but you still express outrage when the culture crosses into your world and demand they adhere to your rules of what's funny/decent/acceptable.... SirChuckBGentoo Penguins is the best kind of Penguin 03:36, 26 February 2013 (UTC)
- Well, here is the sum total of all I know about the culture, at least from how it's described in this thread: it's a place where calling a child a "cunt" is acceptable. There's nothing in there that really draws me in, y'know? Sorry. Just like New York City; Just like Jericho. 04:20, 26 February 2013 (UTC)

Not just the Onion

A few different people are calling out the Oscars for wallowing in sexism and racism. Just like New York City; Just like Jericho.
23:56, 25 February 2013 (UTC)
- Apart from a few gags, MacFarlane was horrific. It seems they wanted to pull a Ricky Gervais at the Golden Globes - the difference being that Gervais was actually funny. Osaka Sun (talk) 00:09, 26 February 2013 (UTC)
- Unique experience for Gervais then. Scream!! (talk) 00:26, 26 February 2013 (UTC)
- I for one am shocked, shocked, SHOCKED that an event devoted to critiquing every little aspect of a performer's hair, make-up and wardrobe choices would have Seth MacFarlane as a host. SirChuckBOne of those deceitful Liberals Schlafly warned you about 00:52, 26 February 2013 (UTC)
- Yeah, MacFarlane was living up to his standards as a horrifically unfunny fuckhead (and he's been an unfunny fuckhead for most of his career). But it is rather ridiculous that people are calling him "too edgy". MacFarlane is far from "edgy"; he's mundane, in that most of his work fits squarely into the prevailing systems of sexism, racism and LGBTphobia with which we're faced every day. "Edgy" my ass. Seme Blue 05:03, 26 February 2013 (UTC)
- Yup. Just like New York City; Just like Jericho. 05:24, 26 February 2013 (UTC)
Why is the above offensive and unfunny when South Park seems to get a free pass for similar shit? AMassiveGay (talk) 15:17, 26 February 2013 (UTC)
- South Park gets called out for racism, sexism, obscenity, etc. on a fairly regular basis. Just like New York City; Just like Jericho. 15:32, 26 February 2013 (UTC)
- Last time I criticized South Park here, I was told I was taking the show too seriously--"Shut up, Brx." 15:33, 26 February 2013 (UTC)
- Also, this is the freakin' Academy Awards, i.e. one of the biggest TV events of the year. A big step up from South Park. Just like New York City; Just like Jericho. 15:35, 26 February 2013 (UTC)
- I was speaking more specifically about here.
I couldn't give a fuck about industry back-slapping. AMassiveGay (talk) 16:46, 26 February 2013 (UTC)
- The judgement of something as "unfunny" is essentially worthless, like when people who don't like a particular work say it's "not art". But for one thing, South Park's humour isn't pretending to be anything beyond juvenile. If that's what the Academy was aiming for at the Oscars, then I suppose MacFarlane delivered. Tialaramex (talk) 20:48, 26 February 2013 (UTC)
- Juvenile is one thing. The articles linked above are more concerned with sexism and racism. Those aren't necessarily synonymous. Just like New York City; Just like Jericho. 21:05, 26 February 2013 (UTC)
- So what is the difference between Ricky Gervais, who performed material that was just as "offensive," yet not remotely funny, and MacFarlane? Why does one get a complete pass (at least on this site) and the other get pilloried? SirChuckBDMorris for new Jinx! 21:29, 26 February 2013 (UTC)
- "Looking at all the wonderful faces here today reminds me of the great work that’s been done this year… by cosmetic surgeons!"
- "[Eddie Murphy] walked out on the [Oscars]. Good for him. When the man who says ‘yes’ to Norbit says ‘no’ to you, you know you’re in trouble."
- "Boardwalk Empire is about a load of immigrants who came to America about a hundred years ago and they got involved in bribery and corruption, and they worked themselves up into high society. But enough about the Hollywood Foreign Press... they do an awful lot for charity and they're a nonprofit organization. Just like NBC."
- "Our next presenters are two of the funniest people in America. She stole the show on SNL killing a cash cow for both of us. Please welcome the wonderful Tina Fey and the ungrateful Steve Carell!"
- "Thank you God, for making me an atheist."
- That's "offensive," but not "We Saw Your Boobs," Adele fat jokes and "I always thought the actor who got most inside Lincoln's head was John Wilkes Booth."
Osaka Sun (talk) 22:42, 26 February 2013 (UTC)
- "The Golden Globes are to the Oscars what Kim Kardashian is to Kate Middleton. Bit louder. Bit trashier. Bit drunker. And more easily bought... allegedly."
- "I love Eddie Murphy. He loves dressing up. He's versatile. Bit of trivia: Eddie Murphy and Adam Sandler, between them, played all the parts in The Help. Brilliant."
- "Our next presenter is the Queen of Pop... No, not you, Elton (John). She’s all woman. She’s always vogue, she’s a Material Girl and she’s just like a virgin. Please welcome Madonna."
- "Justin Bieber nearly had to take a paternity test. What a waste of a test that would have been... The only way that he could have impregnated a girl was if he borrowed one of Martha Stewart's old turkey basters."
- Also, from 2011 - "I was sure the Golden Globe for special effects would go to the team that airbrushed that (Sex and the City 2) poster."
- "(I Love You Phillip Morris is about) two heterosexual actors pretending to be gay — so the complete opposite of some famous Scientologists, then."
- Let's be honest here. Gervais gets a pass because users like him, but they don't care for MacFarlane. SirChuckBCall the FBI 23:10, 26 February 2013 (UTC)
- Alright, so Gervais was just as bad as MacFarlane. If the argument that some people want to advance is that Hollywood reflects a culture that's got a problem with sexism/racism/misogyny/homophobia/transphobia, I don't know how interesting or productive it is to say "this guy is just as bad as/worse than that guy." Maybe the focus should be less on the individuals and more on the hate they're each expressing? Just like New York City; Just like Jericho. 23:22, 26 February 2013 (UTC)
- That's not my argument at all.... I'm saying it's hypocritical of people to scream about how awful/sexist/racist/whateverist MacFarlane was while giving a complete pass to Ricky Gervais for an act that was equal. I don't think either was particularly funny, but humor is subjective.
SirChuckBPenguin Knight, First Class 00:04, 27 February 2013 (UTC)
- The jokes above are offensive? You Americans are such sensitive dears. AMassiveGay (talk) 00:28, 27 February 2013 (UTC)
- I don't think they're offensive, but then it's nearly impossible to offend me with a joke... I thought the Onion joke was actually a funny idea, just poor execution. SirChuckBA product of Affirmative Action 00:38, 27 February 2013 (UTC)
- How would you have executed it? Just like New York City; Just like Jericho. 00:39, 27 February 2013 (UTC)
- Gervais was seen as offensive not because of any sexist or racial jokes, but rather because his jokes were mean-spirited. For example, he made fun of Robert Downey Jr.'s drug habit.--"Shut up, Brx." 01:14, 27 February 2013 (UTC)
- Oh, those poor oppressed Hollywood celebrities. AMassiveGay (talk) 01:40, 27 February 2013 (UTC)
- Who really watches an award show anyway? ~smirk~ (Answer: Apparently a lot of people on RW.) EnlightenmentLiberal (talk) 01:47, 27 February 2013 (UTC)
- To be honest, it's not really my type of joke (I'm more of an observational humor guy), but I would have tried harder to emphasize the intent of mocking the entire culture surrounding the Oscars, rather than the girl. I'm also not big into shock comedy, so I wouldn't have used "cunt" for the punchline.... Speaking hypothetically, I'd have said something along the lines of "Why am I seeing all this fawning over Quvenzhane Wallis' innocent act? It's so obvious she's a complete bitch in private." SirChuckBBoom Goes the Dynamite 08:05, 27 February 2013 (UTC)

Wiki Love

- This discussion was moved here from RationalWiki:Technical support. Seme Blue 06:59, 26 February 2013 (UTC)
Could we please add this extension? It's good for wikis with a good community, and the ability to award barnstars for contributions is a good way to recognize other users. It also has stuff besides just barnstars.
You can also make your own barnstars and other objects to give by editing a JavaScript page on the wiki. I'm working on adding medals to the one at Sturmkrieg. I think that this would be a really good extension to add. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 06:32, 26 February 2013 (UTC)
- I mean, I'd actually love it if we installed the WikiLove extension, but knowing the RW community, it'd end up getting used to post penises on talk pages. Regardless, I think we might have some kind of barnstar system looking like this:
- Order of the Goat. Highest honor available; awarded for years of positive service to the wiki as a "lifetime achievement." Can be granted posthumously.
- Rationalist's Brainstar. Awarded for a history of great service to the wiki's mission, through improving mainspace or promoting and developing the mission beyond it.
- Community Brainstar. Awarded for a history of great service to the community, such as diplomatically resolving conflict, providing insight into best practices of operation, and generally keeping the place fun.
- Techie's Brainstar. Awarded for a history of great service to the more technical aspects of the wiki.
- Pat on the Back. Awarded for any act of great service to the wiki or the community, such as rewriting an important article.
- And of course the fluff we already have (and use very occasionally):
- The half-stale cookie.
- The Beerstar.
- And of course, anything else we can think of. Uke Blue 06:56, 26 February 2013 (UTC)
- We already have a tech one, but I think this is a good idea. TyJFBAA 07:01, 26 February 2013 (UTC)
- We should keep the default options, but we could make our own RW category of barnstars. In relation to the concern you expressed, only users who can edit the system messages can control the images that are used in the barnstars. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 07:22, 26 February 2013 (UTC)
- Are the default options the same as Wikipedia's barnstars? I couldn't tell.
Uke Blue 07:33, 26 February 2013 (UTC)
- I think mostly. They've customized their version of the extension through the js page somewhat. ఠ_ఠ Тод Зенос ан форфар фор 07:38, 26 February 2013 (UTC)
- Okay, then I don't think most people will go for it. We don't want to import Wikipedia's culture any more than we have to—we'd rather start smallish and make our own brainstars, I'd expect. Seme Blue 07:41, 26 February 2013 (UTC)
- I don't see why we need *stars in the first place. If you're a good user, you will develop good standing with the community, and vice versa. To me, *stars seem like a gameificated circle-jerk system and just another source of unnecessary drama. See the award system RWW had for all the evidence you need of that. --CoyoteSans (talk) 23:49, 26 February 2013 (UTC)
- It's a system designed to reward and promote doing good stuff, just like most awards on the planet. If you're a good scientist you'll hopefully develop good standing within the scientific community, but that doesn't mean the Nobels are a "gameificated circle-jerk." Uke Blue 00:06, 27 February 2013 (UTC)
- Ew. Thankfully, importing any relatively new extensions is questionable with 1.19, and I certainly don't plan to do any upgrading until the next long-term support release (slated to be 1.23). - David Gerard (talk) 08:42, 26 February 2013 (UTC)
You can overwrite the existing barnstars and objects with the js page, if using the existing barnstars were a problem. Though I think completely removing the default options would be an annoyance to a lot of people. ఠ_ఠ Тод Зенос ан форфар фор 06:10, 27 February 2013 (UTC)
- We have all this junk which nobody uses lying around in the attic. From what I gather, WikiLove is an extension that makes it quicker to apply such templates, but unless it becomes really fashionable to use them here, I really don't see the point in installing it.
I suspect that the reason people don't award barnstars at RW is because RW users generally see them as pointless & it just isn't part of the site culture/dynamic, not because it takes a few seconds or whatever to place the template. Wēāŝēīōīď Methinks it is a Weasel 20:37, 27 February 2013 (UTC)

All by my Goat

Alain (talk) 17:43, 26 February 2013 (UTC)
- +1 Scream!! (talk) 17:47, 26 February 2013 (UTC)

New Conservapaedia, what's it doing?

Look at this. What do you think?--Seonookim (talk) 01:07, 28 February 2013 (UTC)
- What's Lumenos been up to lately, anyhow? By the way, see where gets you. Sprocket J Cogswell (talk) 01:22, 28 February 2013 (UTC)
- Not sure who "Lumenos" is, so I may be missing the joke, but based on this, I doubt this is a serious website: There is no such thing in Christendom as a number "zero", which is a fabrication of Islamic Arabs acting under the demonic influence of the false prophet Muhammad. How can zero even exist as it does not stand for anything, but for nothing. Mathematicians who persist in using this nihilist concept should be shunned as heretics, or re-educated in proper Conservative mathematics. The West can, and shall, be victorious through proper Roman numeral based math!--Token Conservative (talk) 02:46, 28 February 2013 (UTC)
This is actually pretty good: How do we know Hitler was an atheist? Liberals often like to ask trick questions along the lines of: "If Hitler was an atheist, why did he repeatedly talk about "My Christian faith" and frequently refer to God in Mein Kampf?" There are two problems with this first question. First, it is an example of the typical atheistic trick known as an "argument from evidence". Secondly, it assumes that Hitler told the truth. We know that Hitler was an evil man ready to kill others, and as such it would not be surprising if he were to lie as well.
Fortunately, although we cannot take Hitler's word about his beliefs, there is another much more reliable source - the opinions of other Christians. Many Christians have stated that as they cannot believe that Hitler was a Christian, so they don't think he was one. Which is more reliable - the statements of the mass-murderer Hitler, or the statements of many baptised Christians? VOXHUMANA 03:14, 28 February 2013 (UTC)
- It's sad that A) that is the argument, boiled down, that most use to paint Hitler as an Atheist, and B) that it works. --Revolverman (talk) 03:24, 28 February 2013 (UTC)
- So it's basically random fuckers who are embarrassed about being associated with this guy, or how he describes himself? And no, their argument doesn't work. The word of random f•••ers is not considered a valid historical source. ఠ_ఠ Тод Зенос ан форфар фор 03:26, 28 February 2013 (UTC)
- Pretty sure that's all meant to be parody. Just like New York City; Just like Jericho. 03:27, 28 February 2013 (UTC)
- That would be good, because it's retarded. People are known to be so stupid that I don't generally assume that idiocy is a parody. ఠ_ఠ Тод Зенос ан форфар фор 03:29, 28 February 2013 (UTC)
The commandments are pretty good: Invigilators are reminded to be gentle when chastising editors who are children or of the fairer sex. At least one extra warning is appropriate, and please remember to be firm but authoritative, and to make allowances for hysteria and similar ladylike behaviour. VOXHUMANA 03:57, 28 February 2013 (UTC)
- My sock, the great Parodist, insulted a New Conservapaedia editor personally. Let's see the response down here. --Seonookim (talk) 04:13, 28 February 2013 (UTC)

Soledad

So CNN cans the one TV journalist who has the brains and balls to call politicians out on their bullshit. Lowest point in US journalism since embedded journalists. Now we'll be back to vacuous bubbleheads asking the important questions - "Where do you buy your suits, Senator?"
--PsyGremlinSnakk! 06:03, 28 February 2013 (UTC)

March Madness... Vatican Style!

(In the interest of full disclosure, I did not come up with "Sweet Sistine." I wish I had, though.) MDB (the MD is for Maryland, the B is for Bear) 17:24, 27 February 2013 (UTC)
- Where's a psychic octopus when you need one? --JeevesMkII The gentleman's gentleman at the other site 17:31, 27 February 2013 (UTC)
- I only recently discovered XKCD, but this seems highly relevant, for some reason--Token Conservative (talk) 00:09, 1 March 2013 (UTC)

???

An anonymous IP address created and edited my sandbox for their own purposes, writing about a movie I haven't seen. What on earth...? Not sure where to put it, so I went here.--Seonookim (talk) 05:39, 28 February 2013 (UTC)
- If it's your sandbox, just rake it. --Revolverman (talk) 05:49, 28 February 2013 (UTC)

Politically confused

I suppose this is just me not fitting into existing political categories, but it feels like political confusion. I'm generally libertarian on a lot of issues, except health care. On that, I think the government should pay for all of it, and higher taxes for it aren't a problem. To avoid burdening the working and middle class, I'm alright with taxing the 1% more. In fact, I bet we could have only the 1% pay taxes to fund universal health care and still leave them with a lot of money. For things like cosmetic surgery, I only think the government should pay for it if there's actually a problem with the person's appearance. They don't have to have an actual defect, but if they're fugly they should be able to have it fixed. However, if someone is some super-fussy popular-culture type who wants to change their personal appearance based on a fad, therapy would probably be a better use of government funding for them. ఠ_ఠ Тод Зенос ан форфар фор 02:42, 1 March 2013 (UTC)
- There may be a position for you. It's called "classical liberalism", aka the liberalism of the European Enlightenment.
Read John Stuart Mill as a start. And ask yourself: do you want to pursue policies that achieve a happy, materially wealthy population, and policies that preserve, maintain, and increase self-determination, with maybe a bent towards small government as a consequence of the scientific realities of how markets and governments function? Then you might be a classical liberal. The difference is realizing and accepting that sometimes compelling people to do stuff for the common good is the best plan, such as in the case of vaccines, welfare, etc. Libertarians simply don't recognize the mere existence of positive externalities, or decide that government action to help in those cases always makes things worse, or they care more about the rules of libertarianism than achieving a happy, materially wealthy, free society. EnlightenmentLiberal (talk) 03:21, 1 March 2013 (UTC)
- I've thought about that. Generally I support capitalism, although it needs to be regulated because people will do extremely immoral things in the pursuit of profit. I think the libertarian argument that, without government intervention, consumers will regulate businesses (if companies sell unsafe products w/o govt regulation, people will stop buying from the company and put them out of business) is correct, but it won't stop extremely unethical and inhumane treatment of workers, because such things are done to maximize profits and lower prices, and with lower prices, market forces will drive more business to that company, which would completely fail to stop its problematic behavior. I also don't really support a lot of other government programs. A lot of things can be privately done. The government can encourage beneficial private programs through tax reductions and exemptions.
ఠ_ఠ Тод Зенос ан форфар фор 04:15, 1 March 2013 (UTC)

mw:Extension:Widgets

I think this could be a useful extension, since it creates a namespace with raw HTML enabled that's only editable by authorized users with the proper permissions. It also allows user input into the templates, so it's useful for making embed templates. I've also written a guide on MediaWiki.org for embedding and thumbnailing external images just like wiki-uploaded images. ఠ_ఠ Тод Зенос ан форфар фор 02:49, 1 March 2013 (UTC)

The death knell of the F-35?

Canada has been about the final holdout for the F-35, and now Boeing is openly courting Canada to move its support to Boeing's new Super Hornets. I think the F-35 is a great example of how, even with all the money pumped into it, the US Military is falling apart due to corruption and waste. The F-22 has underperformed and the F-35 might just be a lemon. I'm half tempted to make a page on the F-35 as an example of how the Military-Industrial Complex doesn't even make good Military-Industrial products. --Revolverman (talk) 03:31, 28 February 2013 (UTC)
- Who gives the slightest fuck? They could be flying turboprop airbuses with a machine gun strapped on the back for all the resistance they're ever going to actually fly against. Any power that can actually field a real air force is also likely to be fielding ICBMs, rendering the ability to engage in Top Gun-style heroics supremely irrelevant. At least here in Britain we have the excuse that Argentina might decide they want the Falkland Islands back again at any moment to justify this crap (incidentally guys, now might be a good time. Fuck knows what we'd have to dig up to fly from our carriers at the moment). This shit is just corporate welfare. --JeevesMkII The gentleman's gentleman at the other site 03:54, 28 February 2013 (UTC)
- Hey, don't ask me.
Ask the Pentagon Generals who feel that the cool multi-billion-dollar jets are so necessary, but not proper body armor or anti-rocket and anti-IED systems for their vehicles. --Revolverman (talk) 04:09, 28 February 2013 (UTC)
- The main task of the F-35 is to go against cheap yet advanced Russian SAM systems (w:S-300, w:S-400). Many shitholes have a few of those. --Henk (talk) 06:58, 28 February 2013 (UTC)
- Anybody who has had anything whatsoever to do with things like the F-22 or the F-35 (I have more intimate knowledge of those programs than most) knows that they have been a waste of money. They hearken back to the Cold War arms race and fears of air-to-air combat missions, and nothing else. The funniest part is that the F-35 was supposed to be the cheaper version of the F-22, but it has actually cost more per unit than any other military project ever made. Moreover, most defense tech geeks will tell you that, with the advent of drone aircraft, building manned fighter jets is akin to building wooden warships with broadsides. Reckless Noise Symphony (talk) 09:27, 28 February 2013 (UTC)
- The "holy hyper-expensive" claims I have seen are due to just creative accounting. There are so many dishonest ways to compare aircraft costs or cherry-pick development troubles. Of course it's over budget, but that's just every American military project. --Henk (talk) 11:18, 28 February 2013 (UTC)
- ... Creative accounting? So they are faking that the F-35 is massively over budget? --Revolverman (talk) 17:17, 28 February 2013 (UTC)
- Creative accounting is about moving the money about. For example, rather than say Music Label X made a $500,000 profit on your album, the company says they paid $500,000 extra to Subsidiary Y for marketing expenses. You made nothing (so no percentage for you), but Subsidiary Y is doing great!
In the military it sometimes happens that you don't want to admit Programme X exists, or how expensive it was, but everybody knows about very expensive project Y that you're working on, so you hide the budget for X in the project Y budget. Could that have happened with the F-35? Sure. Can we prove it? Probably not. Tialaramex (talk) 12:57, 1 March 2013 (UTC) So. Aradia, Nepeta, Terezi, Vriska, Kanaya, Feferi, Karkat, Sollux, Tavros, Gamzee, Equius, or Eridan? (This message brought to you by His Reasonableness The High Chancellor Eddie Monah, Champion of Rationality. Sing songs of praise to me or simply worship my genius.) 08:21, 1 March 2013 (UTC) - Aradia is my patron, I cosplay as Vriska, and Karkat is by far the best. Uke Blue 09:39, 1 March 2013 (UTC) - I feel like I am missing something very important right now...--Token Conservative (talk) 22:30, 1 March 2013 (UTC) - Only the Ulysses of the Internet - David Gerard (talk) 22:44, 1 March 2013 (UTC) Need Help Identifying Jacket Apologies for copying the same thing here and WP's reference desk, am hoping that some awesome person out there has an idea what this is :) I recently went on holidays and I lost a jacket which was loaned to me by a friend. I feel particularly bad about it as I promised I wouldn't lose it, so not only do I want to replace it but I want to have a replacement ready by the time I tell him it's lost. The good news is that he's uncontactable for the next few weeks, so I have plenty of time to buy and ship a new jacket. The bad news is that I have absolutely no idea what brand the jacket is. The only information I have is: - these photos of me in it - Jacket Photos - it was gifted to him by family in Shanghai - it had gold Chinese writing on the inside underneath the collar Does anyone have any idea how I can find out what brand this jacket is? Thanks! RyanC (talk) 12:02, 28 February 2013 (UTC) - No. If you were wearing it, did you never notice a brand label in it?
If it was bought in China, it may well not be a brand known in the west anyway. There are tons of clothes for sale from China via eBay, so you could try searching there. But I think it's unlikely you'll come across an exact match without knowing the brand, so you'll probably need to settle for replacing it with something comparable. Wėąṣėḷőįď Methinks it is a Weasel 13:18, 28 February 2013 (UTC) - Buy one as similar as possible to the original and say this is how it came back from the dry cleaner. SophieWilder 13:39, 28 February 2013 (UTC) - Someone lends you a jacket which you promise you won't lose and then you go and lose it? WTF? Well, first of all the pictures you posted of a black jacket are not particularly helpful; I've no clue what material it is. The best thing you can do is own up and admit that you are an untrustworthy loser. ГенгисYou have the right to be offended; and I have the right to offend you. 09:04, 1 March 2013 (UTC) - I believe that is either by Brunello Cucinelli or Kiton. Don't worry, they are cheap and easy to replace. Tielec01 (talk) 09:48, 1 March 2013 (UTC) - Haha I didn't lose it per se, the bus company lost the bag it was in when they were transferring luggage between coaches. They agreed to pay for it, but given they're 10,000km away and barely speak English I don't have high hopes. I feel bad though as it was my responsibility and I should have kept it with me. - I'm not sure what the material was, beyond that it felt smooth. The jacket was a down jacket (contains duck feathers), I was hoping that it would be a famous brand or a rip off of a common design (my fashion knowledge is almost none...). - Thanks for the suggestions, I'll look into them. Seems like this will be harder than I thought RyanC (talk) 11:39, 2 March 2013 (UTC)
https://rationalwiki.org/wiki/RationalWiki:Saloon_bar/Archive194
Several times, I got a request to build an application to support a sports club (e.g. table tennis) to keep track of / administer the competition progress. To do this correctly, games must be generated automatically when a player has been added to a competition. Of course, this can be automated, and this article explains how by using Linq.

The Cartesian product is the mathematical way to solve this, but it also generates pairings of a player with himself, which is not possible (e.g. Player A versus Player A), and it generates duplicates: Player A versus Player B and Player B versus Player A are the same game. So the generated list must be filtered. The filtering is done by using a unique id per player and only adding the second occurrence of a duplicate.

Two classes are used to store player data and the resulting game data:

    public class Player
    {
        public String Name { get; private set; }
        public int ID { get; private set; }

        public Player(String name, int id)
        {
            Name = name;
            ID = id;
        }
    }

    public class Game
    {
        public String PlayerA { get; private set; }
        public String PlayerB { get; private set; }

        public Game(String A, String B)
        {
            PlayerA = A;
            PlayerB = B;
        }
    }

To use these classes, the following code is used in Form1.cs:

    private List<Player> Names = new List<Player>();
    private List<Game> Games = new List<Game>();

    public Form1()
    {
        InitializeComponent();
        Names.Add(new Player("Andre", 1));
        Names.Add(new Player("Luke", 2));
        Names.Add(new Player("Des", 3));
        Names.Add(new Player("Patrick", 4));
        dataGridView1.DataSource = Names;
        dataGridView2.DataSource = Games;
    }

    private void Generate()
    {
        Games.Clear();
        // Cartesian product
        var query = Names.SelectMany(x => Names, (x, y) => new { x, y });
        // convert to list of games
        foreach (var q in query)
        {
            // a game between the same players is not added, nor
            // a duplicate game with both names exchanged (e.g. a vs b and b vs a)
            if ((q.x.Name != q.y.Name) && (q.y.ID > q.x.ID))
                Games.Add(new Game(q.x.Name, q.y.Name));
        }
        // ordered
        //Games.OrderBy(x => x.PlayerA);
        // Alphabetical sort
        Games.Sort((x, y) => string.Compare(x.PlayerA, y.PlayerA));
    }

The form is initialized with 4 players (shown in dataGridView1). By adding the names, dataGridView1 fires its "RowsAdded" event, which in turn calls Generate().

The Generate function clears the list, then generates the Cartesian product list, which contains 16 results for 4 players. Each result is then filtered and, if it passes, added as a new game to the game list. A result passes the filter when the game has no duplicate players and it is the second of a mirrored pair (case A vs B and B vs A).

And since the Game list is databound to dataGridView2, the view is automatically updated and shows the games to play.

From an educational point of view, the Cartesian product is the solution, but doing it with Linq was something to figure out.
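The pairing logic above, a Cartesian product followed by filtering, is language-independent. As a cross-check, here is a Python sketch (player names reused from the article; everything else is the standard library) showing that the filter keeps exactly the n*(n-1)/2 unique games, and that itertools.combinations produces the same list directly:

```python
from itertools import combinations, product

players = ["Andre", "Luke", "Des", "Patrick"]

# Cartesian product: 16 ordered pairs, including self-matches and mirrored duplicates
all_pairs = list(product(players, repeat=2))

# Filter as the article does: drop self-matches, keep only the second of a mirrored pair
games = [(a, b) for i, a in enumerate(players)
               for j, b in enumerate(players) if j > i]

# combinations() yields the same result without needing the filter at all
assert games == list(combinations(players, 2))

print(len(all_pairs), len(games))  # 16 6
```

For 4 players this leaves the expected 6 games, mirroring the article's filtered grid.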
http://www.codeproject.com/Tips/814261/Generate-game-list-based-on-list-of-players-in-Csh
Non-Programmer's Tutorial for Python 3/Dealing with the imperfect

...or how to handle errors

closing files with with

We use the "with" statement to open and close files.[1][2]

    with open("in_test.txt", "rt") as in_file:
        with open("out_test.txt", "wt") as out_file:
            text = in_file.read()
            data = parse(text)
            results = encode(data)
            out_file.write(results)
    print("All done.")

If some sort of error happens anywhere in this code (one of the files is inaccessible, the parse() function chokes on corrupt data, etc.) the "with" statements guarantee that all the files will eventually be properly closed. Closing a file just means that the file is "cleaned up" and "released" by our program so that it can be used in another program.

catching errors with try

    number = int(input("Enter a number: "))
    print("You entered:", number)

Notice how when you enter @#& it outputs something like:

    Traceback (most recent call last):
      File "try_less.py", line 4, in <module>
        number = int(input("Enter a number: "))
    ValueError: invalid literal for int() with base 10: '@#&'

As you can see, the int() function is unhappy with the number @#& (as well it should be). The last line shows what the problem is; Python found a ValueError. How can our program deal with this? What we do is first: put the place where errors may occur in a try block, and second: tell Python how we want ValueErrors handled. The following program does this:

    print("Type Control C or -1 to exit")
    number = 1
    while number != -1:
        try:
            number = int(input("Enter a number: "))
            print("You entered:", number)
        except ValueError:
            print("That was not a number.")

Exercises

Update at least the phone numbers program (in section Dictionaries) so it doesn't crash if a user doesn't enter any data at the menu.

    def print_menu():
        print('1. Print Phone Numbers')
        print('2. Add a Phone Number')
        print('3. Remove a Phone Number')
        print('4. Lookup a Phone Number')
        print('5. Quit')
        print()

    numbers = {}
    menu_choice = 0
    print_menu()
    while menu_choice != 5:
        try:
            menu_choice = int(input("Type in a number (1-5): "))
        except ValueError:
            print("That was not a number.")
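The two sections above combine naturally: with handles the closing, try/except handles the bad data. A minimal sketch (the file name and the default value here are invented for illustration, not from the tutorial):

```python
def read_number(path, default=0):
    """Return the integer stored in a file, or a default on any failure."""
    try:
        with open(path, "rt") as f:      # the file is closed even if int() raises
            return int(f.read())
    except (OSError, ValueError):        # missing file, or non-numeric content
        return default

print(read_number("no_such_file.txt"))   # -> 0
```

The except clause lists both failure modes: OSError for file problems and ValueError for content that int() rejects, exactly the error the interactive example above triggers.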
https://en.wikibooks.org/wiki/Non-Programmer%27s_Tutorial_for_Python_3/Dealing_with_the_imperfect
I do remember that on more than one occasion I have had that problem. During a project - on a schedule. As it was always one of many problems, I solved it and now of course do not remember the particular details. However, in my experience it was usually a problem with common method names. The most prominent example is: toString(); If you go to the implementation of toString() for integer, then you will find that it uses an argument (radix). However, String.toString() doesn't, and neither does Object.toString(). toString is not officially part of an "Object" signature. It gets resolved by using the (still existing) prototype chain. Many IDE tools have problems with that as it's not typed. Other prominent examples of such conflicting method names are: render, save, add (addItem, addChild, ...), move, create, etc. But I have seen more complex examples too. However: Even if I can't produce an example that satisfies your need: You asked for a reason as to why overloading is important to people. This is a reason, and it's a valid one. There is no proper workaround; we just live with it somehow during our daily coding. To conclude my answer to your question: These are the positive impacts of overloading on a language I know of: *) It becomes possible to implement two interfaces with different method signatures for the same method name. *) It becomes possible to add a parameter to a method in an extended class -> toString() + toString(radix); *) Instead of a run-time-type-check-switch: Overloading with compile-time type checks is faster. *) Instead of a run-time-type-check-switch: Overloading might never throw an exception due to wrong type. *) Documentation of different logic for differently typed arguments is shorter (no explanation as to which types are allowed). *) IDEs can support different argument types (show a list of arguments allowed) and the documentation for this particular method.
There are also negative impacts I am aware of: *) Coding out all variants of all combinations of arguments can be horrible. (but: in as3 we have ...args) *) Two methods have a bigger impact on the swf size than one method with an if switch. *) People might be confused as to which method they should use (although: I never had that problem) Right now there are various, hack-like ways to workaround the "having-no-overloading" problem. As overloading is optional (no-one needs to use it) the negative impacts are all voluntary. So: Yes we can live without overloading (we did so for quite a while) but I think it would be good for the overall code quality to have it on board. yours Martin. On 17/01/2012 00:52, David Arno wrote: >> From: Martin Heidegger [mailto:mh@leichtgewicht.at] >> Sent: 16 January 2012 15:22 >> >> To fix it by renaming one would need a complex naming convention in > interfaces like: >> function<namespace>_<functionality>() >> >> Because if we create a interface we don't know in advance what other > interface >> from other frameworks it might be used with it: that means you better > write something >> like this: >> >> interface IContext { >> function get robotlegs_core_destroyed(): Boolean; } >> >> because some other interface might have another getter for "destroyed" >> of which the robotlegs team doesn't know in advance. > > Unless you are creating an overly-large class, it is incredibly unlikely > that you'll need one class to implement two interfaces that both contain the > method "destroy". Remember, if you aim for classes to be single purpose, > this problem should not occur. Or do you know of a real-world situation > where it can occur? > >> (btw.: Complaining about non-descriptive function name in a >> example without implementation is tedious) > Sorry, I wasn't complaining about "test", I was just using it as an example. > The better the name the less likely that there would be a conflict. Guess it > didn't come across that way :) > > David. > >
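For comparison, the "run-time-type-check-switch" mentioned in the thread is exactly what dynamically typed languages formalize as single dispatch. Python (used here only as a neutral illustration, since the thread's language, ActionScript 3, has no direct equivalent) provides functools.singledispatch, which picks an implementation from the first argument's run-time type rather than at compile time:

```python
from functools import singledispatch

@singledispatch
def describe(value):
    # fallback for any type without a registered implementation
    return "unknown"

@describe.register
def _(value: int):
    return f"int:{value}"

@describe.register
def _(value: str):
    return f"str:{value}"

print(describe(5), describe("hi"), describe(3.5))  # int:5 str:hi unknown
```

This buys the per-type documentation and dispatch the post asks for, but the type check still happens at run time, so it has neither of the compile-time advantages listed above.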
http://mail-archives.apache.org/mod_mbox/incubator-flex-dev/201201.mbox/%3C4F145043.6020109@leichtgewicht.at%3E
A navigation bar.

#include <Wt/WNavigationBar>

Adds a form field to the navigation bar. In some cases, one may want to add a few form fields to the navigation bar (e.g. for a compact login option).

Adds a menu to the navigation bar. Typically, a navigation bar will contain at least one menu which implements the top-level navigation options allowed by the navigation bar. The menu may be aligned to the left or to the right of the navigation bar.

Adds a search widget to the navigation bar. This is not so different from addFormField(), except that the form field may be styled differently to indicate a search function.

Adds a widget to the navigation bar. Any other widget may be added to the navigation bar, although they may require special CSS style to blend well with the navigation bar style.

Sets whether the navigation bar will respond to screen size. For screens that are less wide, the navigation bar can be rendered differently (more compact and allowing for vertical menu layouts).

Sets a title. The title may optionally link to a 'homepage'.
https://webtoolkit.eu/wt/wt3/doc/reference/html/classWt_1_1WNavigationBar.html
A: This is a common question. This month, I'll examine what happens on the server so you can avoid some common pitfalls. Which of these methods is better? To do this, declare the interface in ManagedClient. If you decide on this technique and then experience a slowdown, you'll have to determine the origin of the slowdown and whether it can be resolved. You can give these five child controls their own templates, or you can use the existing templates and just assign some properties. In order to access the new COM method on the managed client, I copy over the DLL to the same directory where I have my managed client. I'll also explain how some other features, such as the DataTable's Compute method and the SetOrdinal method, can be used to address common business needs. Once the filter and the expression have been formed, they are passed to the Compute method and the result is sent to the textbox. The GC is not responsible for cleaning up the stack because the space on the stack reserved for a method call is automatically cleaned up when a method returns. When the source parameter is specified on the query string and set to true, the module kicks in. The CCU file maintains an up-to-date copy of the CodeDOM structure of the page, ready to service these requests. ASP.NET scavenges the compilation folders and removes stale resources periodically when an application is altered and a recompilation is required; however, the size of the subtree rooted in Temporary ASP. If we continue with g again, we'll break at the call to ManagedMethod, at which point we can use the SOS command ! In other words, once the module is installed, if you place a call to test. The contents of the factory class change depending on the batch compilation setting. Adam Nathan's COM interop tome, . In this month's column, I will concentrate on some of the questions I am asked most often regarding data manipulation with ADO.
A stack is allocated on a per-thread basis and serves as a scratch area for the thread to perform its work. The FileHash value represents a snapshot of the state of dependencies, while Hash represents a snapshot of the state of the current page file. After the data is loaded into the DataTable using the DataAdapter's Fill method, a column named ExtendedPrice of type decimal is added to the DataTable. Give it a name such as COMClient. bpmd ManagedClient ManagedClass. The attributes maxBatchSize and maxBatchGeneratedFileSize let you limit the number of pages packaged in a single assembly and the overall size of the assembly. In general, you don't want users to wait too long when a large number of pages are compiled the first time. This command outputs a managed DLL called MSDNCOMServerLIB. This new column must be created after the child and parent DataTables have been created, the DataRelation between the two has been established, and the OrderTotal column has been created. A: Calculated fields can be created easily by either using a calculated expression in a SQL statement or creating an expression-bound DataColumn. NET application requires only one extra line in the web. By default, application pages are compiled in batch mode, meaning that ASP. However, if the data changes quite often, this solution has a major flaw in that the loaded data will become stale very quickly. First, notice that the library MSDNCOMServerLib is translated to namespace MSDNCOMServerLib. If you accidentally delete the subtree of an active application, don't fret. While expression-based columns operate on a single row at a time, the DataTable's Compute method allows you to perform operations on a set of rows, given a filter and an expression. This article discusses:
- Understanding memory leaks in managed apps
- Unmanaged memory used in .
NET stored and how are they used to serve page requests? The processing of an ASP. aspx pages that make up an application are compiled in the same temp folder, even if they have the same name and reside in different folders. The first contains the partial class to complete the class in the code file and the actual page class derived from that to serve the request. If batching is turned off, each page originates its own assembly.
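The Compute pattern described in this column, an aggregate expression applied to the rows matching a filter, maps onto any language. Here is a rough Python analogue (the order rows and numbers are invented for illustration, not taken from the column) of an ADO.NET call like table.Compute("SUM(ExtendedPrice)", "Quantity > 5"):

```python
orders = [
    {"UnitPrice": 10.0, "Quantity": 3},
    {"UnitPrice": 4.0,  "Quantity": 8},
    {"UnitPrice": 2.5,  "Quantity": 10},
]

# Calculated column: ExtendedPrice = UnitPrice * Quantity
for row in orders:
    row["ExtendedPrice"] = row["UnitPrice"] * row["Quantity"]

# The Compute("SUM(ExtendedPrice)", "Quantity > 5") equivalent:
# filter first, then aggregate over the surviving rows
total = sum(r["ExtendedPrice"] for r in orders if r["Quantity"] > 5)
print(total)  # -> 57.0
```

The two-step shape is the same as in ADO.NET: the filter plays the role of the row filter string, and the generator expression plays the role of the aggregate expression.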
https://lists.debian.org/debian-qa-packages/2007/05/msg00198.html
What are the benefits of diversification to an investor

1. Is risk aversion a reasonable assumption? What is the relevant measure of risk for a risk-averse investor?

2. What are the benefits of diversification to an investor? What is the key factor determining the extent of these benefits?

3. Total risk can be decomposed into systematic and unsystematic risk. Explain each component of risk, and how each is affected by increasing the number of securities in a portfolio.

Solution Preview

Hi there, Here are your answers:

Risk Aversion: Describes an investor who, when faced with two investments with a similar expected return (but different risks), will prefer the one with the lower risk.

Example: A person is given the choice between a bet of either receiving $100 or nothing, both with a probability of 50%, or instead, a certain (100% probability) payment. Now he is risk averse if he would rather accept a payoff of less than $50 (for example, $40) with probability 100% than the bet, risk neutral if he was indifferent between the bet and a certain $50 payment, and risk-loving (risk-proclive) if it required that the payment be more than $50 (for example, $60) to induce him to take the certain option over the bet. The average payoff of the bet, the expected value, would be $50. The certain amount accepted instead of the bet is called the certainty equivalent; the difference between it and the expected value is called the risk premium.

source:

Benefits of diversification & Key Factors: How ...
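The "key factor determining the extent of these benefits" is the correlation between the assets, and it has a compact form: for n equally weighted assets with identical variance sigma^2 and pairwise correlation rho, portfolio variance is sigma^2 * (1/n + rho*(n-1)/n), which falls toward the systematic floor rho*sigma^2 as n grows. A quick Python check (the numbers are illustrative only, not from the solution):

```python
def portfolio_variance(n, sigma2=0.04, rho=0.3):
    """Variance of an equally weighted portfolio of n identical, equally
    correlated assets: sigma2 * (1/n + rho*(n-1)/n)."""
    return sigma2 * (1.0 / n + rho * (n - 1) / n)

for n in (1, 2, 10, 100):
    print(n, round(portfolio_variance(n), 4))

# Diversifiable (unsystematic) risk shrinks as n grows;
# the correlated (systematic) part, rho*sigma2 = 0.012, remains.
```

This is exactly question 3 in miniature: the 1/n term is the unsystematic component that diversification removes, and the rho term is the systematic component that no number of securities can remove.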
https://brainmass.com/business/foreign-exchange-rates/what-are-the-benefits-of-diversification-to-an-investor-85082
23 June 2009 04:22 [Source: ICIS news] By Helen Yan SINGAPORE (ICIS news)--Asian acrylonitrile (ACN) producers will continue to target India and the Middle East as Chinese demand slumps due to an influx of deep-sea cargoes there, traders and producers said on Tuesday. "The tanks are overflowing and we have enough ACN supply to last until the end of July, so we are not importing any cargo from the Asian producers," a Chinese trader said. Chinese traders had capitalised on the opening of the arbitrage window from the US Gulf. As a result, they had procured large quantities of lower-cost, deep-sea ACN material from the US Gulf. About 10,000 to 15,000 tonnes of deep-sea material from the US Gulf had been arriving in The slump in Chinese demand had prompted the Japanese, Taiwanese and Korean ACN producers to target other markets such as "Although demand in "The Chinese market is not buying at the moment as they have too much supply but demand from Asian ACN producers have been seeking prices above $1,200/tonne (€864/tonne) CFR (cost and freight) Asia but Chinese buying indications for imports had dipped below $1,100/tonne CFR, weighed down by the weak domestic ACN prices in China. Domestic ACN prices in On the other hand, trades into "We expect demand from ($1 = €0.72 / $1 = CNY6.84)
http://www.icis.com/Articles/2009/06/23/9226673/asia-acn-producers-target-india-middle-east-as-china-slows.html
Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 2.0
- Component/s: lang.time.*
- Labels: None
- Environment: Operating System: All, Platform: All

Description

Try this using a Central European TimeZone:

    import java.util.Calendar;
    import java.util.Date;
    import org.apache.commons.lang.time.DateUtils;

    Calendar cal = Calendar.getInstance();
    cal.set(Calendar.MONTH, Calendar.MARCH);
    cal.set(Calendar.YEAR, 2003);
    cal.set(Calendar.DAY_OF_MONTH, 30);
    cal.set(Calendar.HOUR_OF_DAY, 5);
    cal.set(Calendar.MINUTE, 0);
    cal.set(Calendar.SECOND, 0);
    Date date_20030330 = cal.getTime();
    Date expDate = DateUtils.truncate(date_20030330, Calendar.DATE);
    System.out.println(expDate.toString());
    -> Sat Mar 29 23:00:00 MET 2003 instead of Sun Mar 30 00:00:00 MET 2003

If the calendar instance represents a date AFTER the daylight savings time switch and is truncated to a time BEFORE the daylight savings time switch, then the resulting date is wrong. (Daylight savings time started Sun Mar 30 02:00:00, resetting clocks from 01:00:00.) Might also happen when rounding dates up over the daylight savings time switch...

Activity

After some more testing, the light bulb came on as to why the Calendar.add method seemed to be overshooting the correct date and subtracting 6 hours instead of five. This is probably a "duh" for most everyone else, but it took me a few minutes to see it. The DateUtils.modify method, which does the gruntwork of the round and truncate methods, was subtracting the number of hours from the date being rounded/truncated. In this particular case, it was subtracting 5 hours to get from 05:00 to 00:00. However, since the scenario happened to cause this across a DST enablement, Calendar.add was treating the time between 02:00 and 03:00 as non-existing, so subtracting 5 hours from 05:00 was giving 23:00 on the previous day, which is technically correct.
But for a rounding/truncation method, this is not appropriate, so my earlier solution of calculating the desired hour value and using set does appear to be appropriate. Note that I did a few tests on rounding across the 02:00/03:00 barrier, and the rounding also "ignores" the missing hour. So for example, rounding 01:40 gives 03:00, and rounding 02:40 gives 04:00, which IMHO are both correct, since the 02:00-03:00 hour does not exist. The truncate/round logic was adding a negative value to the current hours (using Calendar.add) to move the hours back to zero. When this is done across the beginning of daylight saving time, it has the effect of moving to 23:00 of the previous day, because of the hour that is skipped when daylight saving time begins. For example, if daylight saving time begins at 02:00, and -5 is added to 05:00, the result is 23:00 of yesterday, because the hour between 02:00 and 03:00 doesn't exist. The fix was to do the calculation in the truncate/round code, and use Calendar.set to set the new hour value. Just hit this bug using 2.0. This is what happens to those who work on Sundays. Very delighted that a fix exists. I'm about to give my app to the client and don't want to use current CVS or a custom-patched library. Found a workaround: instead of DateUtils.truncate(date, Calendar.DATE), use: DateUtils.round(DateUtils.truncate(date, Calendar.DATE), Calendar.DATE). The call to round eliminates any 1-hour discrepancy which may exist. 2.1 released, closing. This appears to be fixable with a minor change to the DateUtils.modify method, replacing the val.add call with a val.set call, doing the calculation external to the Calendar instance. For some reason, the Calendar.add method subtracts an additional hour in the particular test scenario mentioned in the bug report, but doesn't with other timezones I tested (such as US/Eastern). So I think the easiest fix is to do the calculation in our code. I have changed the DateUtils.modify method and the existing tests pass.
I've written one test to verify the fix works with the above scenario, and I plan to write some additional tests for rounding.
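The add-versus-set distinction at the heart of this fix can be reproduced in Python with zoneinfo (the analogy to Calendar.add is mine, not a port of the patch; requires Python 3.9+ with tz data for Europe/Berlin). Rebuilding the wall-clock fields gives the correct midnight, while subtracting five elapsed hours reproduces the reported 23:00-of-the-previous-day result:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("Europe/Berlin")
d = datetime(2003, 3, 30, 5, 0, tzinfo=tz)   # DST began at 02:00 that morning

# "set"-style truncation (the fix): rebuild the wall-clock fields
truncated = d.replace(hour=0, minute=0, second=0, microsecond=0)

# "add"-style truncation (the bug): subtract 5 elapsed hours, as Calendar.add
# does -- the skipped 02:00-03:00 hour pushes the result into the previous day
shifted = (d.astimezone(timezone.utc) - timedelta(hours=5)).astimezone(tz)

print(truncated.isoformat())  # 2003-03-30T00:00:00+01:00
print(shifted.isoformat())    # 2003-03-29T23:00:00+01:00
```

Only four wall-clock hours separate 05:00 from midnight on that date, which is exactly why subtracting five elapsed hours overshoots.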
https://issues.apache.org/jira/browse/LANG-13?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
Community Discussion board where members can learn more about Integration, Extensions and APIs for Qlik Sense.

Hi all,

Is there a way to trigger a Qlik Sense reload using AWS Lambda? Does anyone have some sample code to share?

Thank you

I've done this using the qsAPI package: GitHub - rafael-sanz/qsAPI: QlikSense python API client for QPS and QRS interfaces

I added a "StartTask" method to the library. I suppose I should submit a pull request to GitHub, but for now here is the method I added:

    def StartTask(self, pName):
        '''
        @Function: Start a task by name
        @param pName: Task Name
        @return: json response
        '''
        return self.driver.post('/qrs/task/start/synchronous', param={'name': pName}).json()

And here's my AWS Lambda function:

    def lambda_handler(event, context):
        import qsAPI
        import os

        PROXY = os.environ['proxy']
        qrs = qsAPI.QRS(proxy=PROXY, certificate='client.pem')
        qrs.StartTask('ReloadCallCenterStatus')

"proxy" is an environment variable that contains my Qlik Sense server address.

-Rob
https://community.qlik.com/t5/Qlik-Sense-Integration-Extensions-APIs/How-to-trigger-a-Task-reload-using-AWS-Lambda/td-p/6217
by Mike Wooten 09/26/2007 JAX-WS (Java Architecture for Web Services) is a standards-based API for coding, assembling, and deploying Java Web services, designed to replace JAX-RPC. JAXB (Java Architecture for XML Binding) is a Java/XML binding technology. JAX-WS uses JAXB to handle all the Java binding chores. This article provides an overview of the JAX-WS 2.0 and JAXB 2.0 support in BEA WebLogic Server 10.1. I include sample code to get you started. JAX-WS uses JAXB to handle all the Java binding chores, so I will primarily be discussing JAXB as it relates to JAX-WS. A skilled Java developer is typically also a very busy one. This being the case, I'll limit the discussion to: For those of you who are too busy to even read this article, feel free to just go ahead and download the article's sample code. There is a README file inside the zip, which walks you through all the steps to get things working. Here's a list of some of the more interesting stuff you can do with JAXB 2.0. This is not to say that you can't do the same things with other Java-to-XML/XML-to-Java binding technologies. It's merely stating what you can do with JAXB 2.0: <xs:schema> elements. These <xs:schema> elements can use <xs:import> and <xs:include> elements to reference other <xs:schema> elements. Here's a list of some of the stuff you can't (or I didn't see a way to) do with JAXB 2.0: JAX-WS is a standards-based API for coding, assembling, and deploying Java Web services. It uses JAXB to handle all the Java binding chores associated with this. JAX-WS 2.0/2.1 doesn't support the use of JAX-RPC or Apache Beehive XMLBean types—just JAXB ones. JAX-WS provides two programming models for developing a Web service endpoint: Start from Java—This programming model provides you with a lot of control over the Java data types used in the method signatures of your Web service endpoint.
Here, you hand-code (or use a tool to generate) the Java objects that will be used as the input arguments and return value of Web service operations, along with JWS annotations. BEA provides the jwsc Ant task for the "Start from Java" programming model. It wraps (that is, invokes) the Glassfish wsimport Ant task internally, so I don't directly use that Ant task in the build.xml. The Ant Task Reference for jwsc in the BEA documentation describes how to use the <jws> element's type="JAXWS" attribute to generate JAXB artifacts. The jwsc Ant task has a <binding> child element for specifying the JAXB binding customization file to use. Start from WSDL—This programming model generates the skeleton code for your Web service endpoint from the contents of a WSDL. The <xs:schema> sections in the WSDL are used to generate the Java data types used as the input arguments and return value of Web service operations. BEA provides the wsdlc Ant task for the "Start from WSDL" programming model. It wraps the Glassfish wsimport Ant task internally, so I don't directly use that Ant task in the build.xml. The Ant Task Reference for wsdlc in the BEA documentation describes how to use the <wsdlc> element's type="JAXWS" attribute to generate JAXB artifacts. The wsdlc Ant task has a <binding> child element for specifying the JAXB binding customization file to use. The remainder of the article walks through the process of creating, deploying, and testing a sample POJO-based (Plain-Old Java Object) JAX-WS Web service endpoint, named DataStagingService. The first step involves creating a JAX-WS customization file. This doubles as a JAXB binding customization file, and allows you to control the JAX-WS and JAXB build-time processes, as well as the artifacts produced by them. The customization file is an XML document that conforms to the XML schemas for the and namespaces.
The JAX-WS customization file for this DataStagingService Web service is pretty small, so I've included it here: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <bindings wsdlLocation="DataStagingService2.wsdl" > <bindings xmlns: <package name="services.datastaging"> <jxb:javadoc> <![CDATA[<body>Package level documentation for generated package services.datastaging.</body>]]> </jxb:javadoc> </package> <jxb:schemaBindings> <jxb:package </jxb:schemaBindings> </bindings> </bindings> I basically just use the customization file to control the Java package name of classes that are generated. You can do a lot more than this in one of these files. The next listing contains the XML Schema used in the WSDL, for the DataStagingService Web service: <xs:schema <xs:complexType <xs:sequence> <xs:element <xs:complexType> <xs:sequence> <xs:element <xs:sequence> <xs:element <xs:complexType> <xs:sequence> <xs:element <xs:element </xs:schema> The XML Schema section from this WSDL is pretty normal-looking. It's your basic garden variety, global complexType elements (with anonymous and explicit content models) combined to create request and response messages. DataStagingService is a doc/literal style Web service, so global elements have been included for use with the WSDL's <part> element.
http://www.oracle.com/technetwork/articles/entarch/jax-ws-jaxb-087545.html?ssSourceSiteId=otnes
I’ve previously posted python code to check if a field index exists for both ArcGIS 9.3 and ArcGIS 10.0. Recently I have been working on a process that was using this code, but it was not working because it looks for an index with a specific name. It was not working in this case because the names of the indexes were getting incremented as they were being created. For example, I was building an index on the table C5ST, field RelateId ([C5IX].[Relateid]) named I_C5IX_RelateId. That worked fine until we switched our process so that we now keep multiple versions of some tables, each with a date-based suffix. We now have tables named C5St_20110625 and C5St_20110626. The index-naming scheme, however, was still producing I_C5IX_RelateId, and it worked great on the first one. But when it created the second one, even on a different table, the index was automatically named I_C5IX_RelateId_2, even though the name I_C5IX_RelateId was specified when trying to create the index.

Before generating relates, our code checks to see if the key fields are indexed, and if they are not, builds an index. Because of the naming situation, multiple duplicate indexes were being created. Probably not too harmful, but it is a little messy.

So I re-wrote the code so that you pass the function the table name and field name that you want to check, and it checks to see whether an index exists for that field and returns a Boolean. The one little wrinkle I put in is to account for indexes that span multiple fields: the "if (len(iIndex.fields)==1):" check only compares single-field indexes, so an index spanning multiple fields is not treated as a match.

    import arcpy

    def fieldHasIndex(tablename, fieldname):
        if not arcpy.Exists(tablename):
            return False
        tabledescription = arcpy.Describe(tablename)
        for iIndex in tabledescription.indexes:
            if (len(iIndex.fields) == 1):
                if (iIndex.fields[0].Name.upper() == fieldname.upper()):
                    return True
        return False
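Since the check itself is plain attribute traversal, its logic can be exercised without an ArcGIS install by standing in simple objects for what arcpy.Describe returns. MockField and MockIndex below are hypothetical stand-ins, not arcpy classes; only the comparison logic mirrors the function above.

```python
# Hypothetical stand-ins for arcpy's Describe results -- not arcpy classes.
class MockField:
    def __init__(self, name):
        self.Name = name

class MockIndex:
    def __init__(self, field_names):
        self.fields = [MockField(n) for n in field_names]

def field_has_index(indexes, fieldname):
    """Return True only if a single-field index covers fieldname (case-insensitive)."""
    for index in indexes:
        if len(index.fields) == 1:
            if index.fields[0].Name.upper() == fieldname.upper():
                return True
    return False

indexes = [
    MockIndex(["RelateId"]),            # single-field index
    MockIndex(["RelateId", "Suffix"]),  # multi-field index, skipped by the check
]

print(field_has_index(indexes, "relateid"))  # True (case-insensitive match)
print(field_has_index(indexes, "Suffix"))    # False (only in a multi-field index)
```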
http://milesgis.com/2011/07/05/checking-to-see-if-a-field-index-exists-using-arcpy-argis-10-0-redux/
Mini Batch Gradient Descent

Because of the way that we vectorize our implementations of forward and back propagation, the calculations on each step are done in a single matrix multiplication operation. This is great for performance's sake, but at scale, it represents an issue. Because most deep-learning applications tend to amass huge datasets to get piped into them, it becomes increasingly difficult to perform all of these in-memory computations when you can't, well, hold everything in memory.

Mini Batch: A Solution

If we refer to the "do it all at once" training as Batch Gradient Descent, then Mini Batch Gradient Descent involves splitting our original dataset up into a handful of smaller datasets, then running each of them through the algorithms we've been using. In pseudocode:

    for batch in batch_dataset(X, y):
        forward_prop()
        cost_fn()
        back_prop()
        update_weights()

It's important to note that batch_dataset() has two steps:

- Shuffling the dataset, maintaining matching X, y pairs
- Partitioning the shuffled data into several smaller batches

Cost Function Over Time

With batch gradient descent, our cost function monotonically decreased over time as we continually iterated over the dataset.

    from IPython.display import Image
    Image('images/batch_descent.png')

Alternatively, because we're taking random samples of data, some samples may yield more error than others within the same iteration, so we get this local oscillation / global decrease behavior that makes for less-neat performance investigation.

    from IPython.display import Image
    Image('images/mini_batch_descent.png')

Determining Batch Size

So how many batches should we break our data up into? Consider a dataset with m training examples. At two extremes we can have:

- m batches. This undoes all of the performance gains that we got with vectorization.
- 1 batch. This is just batch gradient descent, which, again, we can't fit into memory.

And for Computational Reasons™, our batches should always be a power of 2.
Typically 64, 128, 256, or 512. Be careful to ensure that your batch fits into memory!

When to Even Bother?

If you have a small training set (m < 2000), just use batch gradient descent.
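The two steps of batch_dataset() described above can be sketched in a few lines. The function name matches the pseudocode, but the implementation details here (pure-Python lists, a fixed seed for reproducibility) are my own assumptions, not from the notes.

```python
import random

def batch_dataset(X, y, batch_size=64, seed=0):
    """Shuffle X/y together, then partition into mini-batches.

    Step 1: shuffle while keeping each (X, y) pair matched.
    Step 2: split the shuffled pairs into batch_size chunks.
    """
    pairs = list(zip(X, y))
    random.Random(seed).shuffle(pairs)
    batches = []
    for start in range(0, len(pairs), batch_size):
        chunk = pairs[start:start + batch_size]
        X_batch = [x for x, _ in chunk]
        y_batch = [label for _, label in chunk]
        batches.append((X_batch, y_batch))
    return batches

# 10 examples with batch size 4 -> batches of sizes 4, 4, and 2
X = list(range(10))
y = [x * 2 for x in X]
batches = batch_dataset(X, y, batch_size=4)
print([len(xb) for xb, _ in batches])  # [4, 4, 2]
```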
https://napsterinblue.github.io/notes/machine_learning/neural_nets/mini_batch/
Challenges I face(d) with Ember.js

Ember.js is one of the popular JavaScript front-end frameworks. I have been using Ember.js since mid-2017. However, I still get puzzled by it. ember-cli, the primary command-line tool for any Ember.js project, has been of great help, but it has also created a lot of magical experiences. It takes a lot of dedicated effort and intelligence from the community to create and maintain such a framework and tooling. Ember.js is often compared to magic. There is a lot of intelligence behind every magic, and Ember.js is no exception. Magic is great, I guess we all love magic, but it is an illusion and not a reality. As a commoner, I get overwhelmed, rather perplexed, in this magical land of Ember.js quite often. In this post, I would like to share my experience, the challenges that I face(d) with Ember.js. This post is not to criticize the effort that the community has put forward. Thank you, all the Ember.js community, for your contribution. 🙏

The challenges

The challenges that I primarily face(d) are related to:

- Ember Object Model
- Handlebars
- Convention over Configuration

Ember Object Model

I found the Ember Object Model extremely difficult to get my head around.

Grammar/syntax related to Ember Object Model

For example, consider the following code snippet.

    filteredCollection: computed(
        'someObject.someCollection.@each.{someProperty,someOtherProperty}',
        function() {
            const someCollection = this.get('someObject.someCollection');
            let filteredCollection;
            // some computation to update filteredCollection
            return filteredCollection;
        }
    )

When I noticed 'someObject.someCollection.@each.{someProperty,someOtherProperty}' for the first time, I had no clue how to decipher it. I also could not understand why I needed const someCollection = this.get('someObject.someCollection'); when I could do const someCollection = this.someObject && this.someObject.someCollection;.
Similarly, I did not understand why I needed this.set('someObject', someObject) when I could do this.someObject = someObject. I still get confused with these syntactic sugars, which are the fundamentals of the Ember Object Model, used to observe any changes in the data and update the UI accordingly. In my humble opinion, these look more like a Domain Specific Language. As of Ember 3.1 it is possible to use this.someObject instead of this.get('someObject'); however, for setting a value, one still needs to use this.set('someObject', someObject). Refer to the Ember.js 3.1 release blog post for more information.

Ember object and prototype chain

The understanding of object creation in the Ember ecosystem can be challenging too. All the properties passed as the object hash to the extend method get associated with the prototype and will be shared across every instance. Consider the following example:

    const MyObject = Ember.Object.extend({
      actions: {
        foo() {
          console.log('hi');
        }
      }
    });

    const one = MyObject.create();
    const two = MyObject.create();

    one.hasOwnProperty('actions') // false
    Object.getPrototypeOf(one).hasOwnProperty('actions') // true

    two.hasOwnProperty('actions') // false
    Object.getPrototypeOf(two).hasOwnProperty('actions') // true

    one.actions.foo(); // hi

Updating actions.foo on any instance will impact every other instance.

    one.actions.foo(); // hi

    two.actions.foo = function() { console.log('hello') }

    one.actions.foo(); // hello

Similarly, if the actions of an instance of any component is modified, it will cascade to all the instances. Consider the following example:

    // some-component-test.js

    const SomeComponent = this.owner.lookup('component:some-component');

    SomeComponent.actions.someAction = function() {
      /**
       * Now actions.someAction for every SomeComponent
       * will refer to the new implementation
       */
    }

Now actions.someAction for every SomeComponent will refer to the new implementation. Ignorance about this fundamental can create leakage in tests.
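The same sharing behavior can be reproduced in plain JavaScript, without Ember, since it is ordinary prototype semantics. A sketch using Object.create (the object names here are illustrative, not Ember API):

```javascript
// 'actions' lives on the shared prototype, so mutating it through one
// instance is visible through every other instance.
const proto = {
  actions: {
    foo() { return 'hi'; }
  }
};

const one = Object.create(proto);
const two = Object.create(proto);

console.log(one.hasOwnProperty('actions'));  // false -- found via the prototype
console.log(one.actions.foo());              // 'hi'

// Reassigning through one instance mutates the shared 'actions' object...
two.actions.foo = function() { return 'hello'; };

// ...so the change leaks across instances.
console.log(one.actions.foo());              // 'hello'
```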
Handlebars

I find Handlebars extremely challenging. There is a lot to learn before one can get productive in Ember.js. Handlebars is a Domain Specific Language on its own, and its Lisp-like syntax gets in my way every now and then. I have seen the use of handlebar helpers getting abused: business logic which should be part of the .js files being pushed into the application's handlebar helpers. One could argue that a handlebars helper is nothing but a .js file. But I fail to understand why to invoke a JavaScript function from a .hbs file using a Lisp-like syntax when the same could have been done in the corresponding .js file. If there were a way to write inline JavaScript in a .hbs file, like .jsx, then I would not have to write helpers for simple iteration and logical expressions.

Let's consider the following scenario, where I would like to loop a desired number of times to repeat the same markup fragment:

Ember.js

    // app/helpers/repeat.js
    import { helper } from '@ember/component/helper';
    import { typeOf } from '@ember/utils';

    export function repeat([length, value]) {
      if (typeOf(length) !== 'number') {
        return [value];
      }
      return Array.apply(null, { length }).map(() => value); // eslint-disable-line
    }

    export default helper(repeat);

    // components/progress-bar.js
    import Component from '@ember/component';
    import { computed } from '@ember/object';

    export default Component.extend({
      slots: computed('numberOfSlots', function() {
        const numberOfSlots = this.get('numberOfSlots');
        return numberOfSlots || 5;
      }),
    });

    // templates/components/progress-bar.hbs
    <div class="progress-bar">
      {{#each (repeat slots)}}
        <div class="slot"></div>
      {{/each}}
    </div>

    // Usage:
    // in some-where-else.hbs
    {{progress-bar numberOfSlots=5}}

React

    // ProgressBar.js
    import React from 'react';

    function ProgressBar(props) {
      const numberOfSlots = props.numberOfSlots || 5;
      const slots = [];

      for (let i = 0; i < numberOfSlots; i++) {
        slots.push(<div class='slot'></div>);
      }

      return (
        <div class="progress-bar">
          {slots}
        </div>
      );
    }

    // Usage:
    // in SomeWhereElse.js
    <ProgressBar numberOfSlots={5} />

Native JavaScript

    function progressBar(numberOfSlots = 5) {
      const slots = [];
      for (let i = 0; i < numberOfSlots; i++) {
        slots.push(`<div class='slot'></div>`);
      }
      const progressBar = document.createElement('div');
      progressBar.innerHTML = slots.join('');

      return progressBar;
    }

    // Usage:
    // in SomeWhereElse.js
    progressBar(5);

The usage in the Ember.js, React, and Native JavaScript versions looks almost the same, other than the syntax. However, the implementation in Ember.js is a little complicated, in my opinion. Someone with knowledge of JavaScript could understand the React code more easily than the Ember.js one. Also, the React example looks very similar to the Native JavaScript implementation.

In the React example, I used a for loop, similar to the Native JavaScript implementation. However, in the Ember.js example, I needed to understand how a helper works and use two helpers to create the desired iteration. There is no loop construct available in Handlebars; the only way to achieve looping is to create an array of the desired length and iterate over it using the each helper.

Also, the React and Native JavaScript implementations each needed a single file. However, the Ember.js implementation spans three files. The concern of spreading across multiple files is not a huge deal during the initial implementation, but it gets challenging during maintenance to keep track of the implementation across several locations.

Tracking the source of the implementation

By looking at the .hbs file it is not evident whether the repeat helper is from the application code or from an addon. If it is from an addon, then which addon? The only way I find it is by running a find . -name repeat.js | grep 'helpers/repeat.js'. This is true for any code being consumed from an addon. I don't know if there is a better way to do that.
Handlebar’s grammar and syntax

Things get more confusing with the as |alias| syntax, primarily with the contextual components template syntax. Understanding and getting used to this syntax takes time. Also, to keep track of how things are being passed, I have to keep switching between multiple .js and .hbs files. I guess I will face similar situations when contextual helpers become mainstream. A deep understanding of Handlebar’s vast grammar and syntax is very much needed to get productive in Ember.js. One could say that JSX is also a DSL. For sure it is, but with very limited grammar, and most of it looks quite similar to native HTML constructs. I wish I could avoid handlebar helpers with regular utils or inline JavaScript and avoid the burden of learning yet another Domain Specific Language.

Is it a helper or a component?

Often, just by looking at the code, it is not clear whether it is a component or a helper usage. For example, link-to: the API documentation lists it in the Ember.Templates.helpers section but describes it as a component. I often find myself lost in this ambiguity.

Origin of the property/attribute

Identifying the origin of a property/attribute is not possible just by looking at the .hbs file. It gets more complicated when it involves multiple levels of components. To identify the origin of the information, I have to continuously search for the desired property/attribute in the corresponding .js and .hbs files and traverse up the chain. I wish there were a way for me to identify which property is part of the current component and which has been passed down, just by looking at the corresponding .js and .hbs files. In React, this.props and this.state make it easy to identify. Moreover, the .hbs and the corresponding .js files are far apart in the filesystem, which makes this tracking more difficult.
Fuzzy search in the editor helps, but I wish I didn't need a separate .hbs file and could have the rendering code as part of the .js file; then I could avoid the constant context switch between .js and .hbs files.

Convention over Configuration

When files are created in an Ember.js project using ember-cli, they are generated in the corresponding directory with respect to their functionality. For example, when a service named some-service is created using ember-cli, it gets created in the app/services directory. ember-cli also generates the corresponding some-service-test.js file, in this case a unit test.

    $ ember g service some-service

    installing service
      create app/services/some-service.js
    installing service-test
      create tests/unit/services/some-service-test.js

If a component is created, ember-cli will create the corresponding .js, .hbs, and -test.js files, in this case an integration test.

    $ ember g component some-component

    installing component
      create app/components/some-component.js
      create app/templates/components/some-component.hbs
    installing component-test
      create tests/integration/components/some-component-test.js

This is great! 🎉 As a developer, I don't have to think about where the files need to be created. ember-cli has done it for me. This is primarily possible due to convention over configuration. ember-cli also scaffolds the bare-metal structure of each file's content. This is extremely helpful during initial development.

Magical filename resolution

The Ember.js runtime is based on dependency injection, which also relies on the same convention. For example, to consume a service named some-service, I can just inject it using the name, and Ember.js will figure out how to make it available.

    import { inject as service } from '@ember/service';
    ...
    someService: service('some-service'),
    ...

Note that I didn't have to specify the full path of the some-service.js file. Ember.js would automatically inject some-service. I could have also omitted the some-service.
    someService: service(),

Ember.js will translate someService to some-service and will inject it automatically. This initially threw me off the cliff when I was modifying existing code. I had no idea what was going on. This still creates an unpleasant workflow for me. My editor will not autocomplete the methods available on this service. I have to manually look up the file and refer to it. This concern is valid for any file in an Ember.js project. Due to this convention and implicit linking/binding, things become more complicated if the implementation is from an addon. At times it is not obvious what is going on.

Black magic

For example, when I was working on an existing route I came across the following code:

    import Route from '@ember/routing/route';
    import { inject as service } from '@ember/service';
    import { hash } from 'rsvp';

    export default Route.extend({
      xhr: service('xhr'),

      // ...

      prefetch() {
        return hash({
          completionMeterData: this._getData(),
        });
      },

      _getData() {
        return this.get('xhr').fetch(API_ENDPOINT);
      },
    });

I had no clue how the prefetch method was being processed. The Ember.js Route API documentation has no mention of it. Eventually, I tracked it down to the ember-prefetch addon using the find command. Searching for prefetch.js in the project's node_modules directory, I found a couple of prefetch.js files belonging to the ember-prefetch addon. This is pretty much the workflow if the fuzzy file lookup in my editor fails. The node_modules directory is excluded from my editor's file lookup, and thus if the implementation is not part of the application's code, my editor does not display any match. Looking at the ember-prefetch implementation, I guess the files need not be named prefetch.js, but the name was helpful for me to track it down. However, it was quite a time-consuming process. This is just one example; there are several such instances when the code is part of an addon.
By looking at the application's code it is extremely difficult to identify how things come together, due to these implicit, or rather magical, bindings: partly at build time and partly at runtime. I wish the files were explicitly referenced; then I could easily identify the origin of any code using the references created by my editor.

Conclusions

I see a lot of improvement with respect to developer ergonomics happening in Ember.js; however, it still has a steep learning curve. I wish that someday Ember.js will be less magical and more accessible to any JavaScript developer. If you are an Ember.js Pro/Core contributor and happen to read this post, please feel free to correct me if I have mistaken. I would be more than happy to update the post with any corrections. However, if you happen to find my concerns valid, please do the needful to address them. Once again, thank you, all the Ember.js contributors, for your hard work and time. 🙏
https://medium.com/@sarbbottam/challenges-i-face-with-ember-js-59bfba30416e
Displaying Forecast Info on OLED with Raspberry Pi

In this tutorial, you'll learn how to acquire data from a webpage and how to present that data on an OLED Wireling on your Raspberry Pi. This tutorial makes a simple daily weather summary display, but this concept can be applied to any kind of data from any webpage for whatever your particular application is.

Materials

Hardware

Software

- Python 3 (Python 2 is not supported!)
- All Python packages mentioned in the Pi Hat setup tutorial (tinycircuits-wireling, adafruit-circuitpython-ads1x15, and adafruit-circuitpython-busdevice)
- SSD1306 Python package
- liveweather Python Example
- An internet connection

Hardware Assembly

If you haven't already, attach the Wireling Adapter Raspberry Pi Hat to your Raspberry Pi. Then, use a Wireling cable to plug the 0.96" OLED Wireling into Port 0. Finally, plug in your Micro USB cable to power the Pi.

Software Setup

Assuming that you have gone through the Pi Hat Setup Tutorial and the OLED Wireling Python Tutorial, all necessary packages for this project should already be installed on your Pi.

ThingSpeak

ThingSpeak is an IoT analytics platform service that allows you to request information displayed on a website. To configure ThingSpeak for this project, you will need to create a ThingSpeak account. Once you are signed in, go to Apps > ThingHTTP. Then, click the "New ThingHTTP" button to create a new ThingHTTP request. From here, you will need to fill out three fields: Name, URL, and Parse String.

The Name is simply a label for the request; be specific enough so that you do not get lost trying to track down the correct request later!

Example.) Weather-Akron-Condition

The URL field is the full URL of the webpage you are requesting data from. For this project, we are using a weather site:

Example.)

Finally, the Parse String is a reference to the particular piece of data you want this request to handle.
You obtain this field by going to the webpage you would like to request data from, right-clicking on the piece of data you would like, then selecting "Inspect Element." Once in the DOM and style inspector, the element you selected should already be selected. Right-click the HTML tag containing the data you would like to get, select "Copy," and then "XPath." You will then paste the XPath into the Parse String field.

ex.) /html/body/div[3]/div[1]/span[1]/span[2]/span[1]

You may need to append /text() to the XPath, as some HTML remnants may persist otherwise.

ex.) /html/body/div[3]/div[1]/span[1]/span[2]/span[1]/text()

After this, click the "Save ThingHTTP" button to save your changes. This should allow you to view your ThingHTTP request details. On the right side of your screen should be the ThingSpeak ThingHTTP request URL. You can paste this URL directly into your browser's address bar to view the output of the request. It is a good idea to check this before pasting the URL into the code!

Repeat this process for any other pieces of data you would like to use for your project.

Upload Program

Create a new python file in whichever directory you choose by typing the following into the terminal:

    nano liveweather.py

This will open up the text editor nano where you can simply paste the following code:

Code

    import board
    import busio
    import tinycircuits_wireling
    from digitalio import DigitalInOut
    from PIL import Image, ImageDraw, ImageFont
    import adafruit_ssd1306
    from time import sleep
    import requests
    import datetime

    # Create the I2C interface.
    i2c = busio.I2C(board.SCL, board.SDA)

    # A reset line may be required if there is no auto-reset circuitry
    reset_pin = DigitalInOut(board.D10)  # reset pin for Port0

    # Initialize and enable power to Wireling Pi Hat
    wireling = tinycircuits_wireling.Wireling()
    OLED96_port = 0
    wireling.selectPort(OLED96_port)

    # Create OLED Display Class
    display = adafruit_ssd1306.SSD1306_I2C(128, 64, i2c, addr=0x3c, reset=reset_pin)  # 0.96" Screen

    # Initialize display.
    display.fill(0)
    display.show()

    #font = ImageFont.load_default()
    font = ImageFont.truetype('/home/pi/.fonts/Minecraftia.ttf', 8)
    bigfont = ImageFont.truetype('/home/pi/.fonts/Minecraftia.ttf', 10)

    while True:
        print("Updating...")

        # request forecast data via ThingSpeak -- be sure to use your own URLs here!
        weather = requests.get('')
        # print("got weather")
        tempHi = requests.get('')
        tempHiTime = requests.get('')
        # print("got tempHi")
        tempLo = requests.get('')
        tempLoTime = requests.get('')
        # print("got tempLo")
        precip = requests.get('')
        # print("got precip")
        hum = requests.get('')
        # print("got hum")
        wind = requests.get('')
        # print("got wind")

        time = datetime.datetime.now()

        # Create blank image for writing to screen
        display.fill(0)
        image = Image.new('1', (display.width, display.height))
        draw = ImageDraw.Draw(image)

        # CONDITION
        print("Weather: " + weather.text)
        draw.text((0, 0), weather.text, font=font, fill=255)

        # TEMPERATURE
        print("Temperature (Lo/Hi): " + tempLo.text + " @" + tempLoTime.text + " / " + tempHi.text + " @" + tempHiTime.text)
        draw.text((0, 20), tempLo.text + " @" + tempLoTime.text + " / " + tempHi.text + " @" + tempHiTime.text, font=font, fill=255)

        # PRECIPITATION
        print("Precipitation: " + precip.text + "in")
        draw.text((0, 30), "Precip: " + precip.text + "in", font=font, fill=255)

        # HUMIDITY
        print("Humidity: " + hum.text + "%")
        draw.text((0, 40), "Hum: " + hum.text + "%", font=font, fill=255)

        # WIND
        print("Wind: " + wind.text + "mph")
        draw.text((64, 40), "Wind: " + wind.text + "mph", font=font, fill=255)

        # TIME STAMP
        print("Last Updated " + time.strftime("%b %d %H:%M:%S"))
        draw.text((0, 54), "Updated " + time.strftime("%b %d %H:%M:%S"), font=font, fill=255)

        # update OLED display
        display.image(image)
        display.show()

        sleep(60)  # update every 60s

You'll need to change the ThingSpeak request URLs to your own based on the information you want to include in your display. It may be easier to edit this code in a text editor or IDE before pasting it into the nano text editor on the Pi. After changing the URLs and pasting the code, press Ctrl+O to write out the buffer, press Enter to save the buffer to liveweather.py, and then press Ctrl+X to exit.

Before running this program, be sure that the pigpio service is running on your Pi. This service can be started by typing:

    sudo pigpiod

After this, you will be able to run the program by running the following command in the terminal:

    sudo python3 liveweather.py

The program should now run, and the values acquired via ThingSpeak will be printed to the OLED display. Terminate execution at any time by pressing Ctrl+C. It may take a moment to see the first pieces of data since several separate ThingSpeak requests have to be processed each time the loop runs.

If you have any questions or feedback, feel free to email us or make a post on our forum. Show us what you make by tagging @TinyCircuits on Instagram, Twitter, or Facebook so we can feature it. Thanks for making with us!
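One way to make a display script like this testable without a network connection or an OLED attached is to pull the text layout into a pure function. This is a hypothetical refactor, not part of the tutorial code; the function name and argument order are my own.

```python
# Hypothetical refactor: keep the text layout in a pure function so it
# can be tested without the OLED hardware or ThingSpeak access.
def format_display_lines(weather, temp_lo, lo_time, temp_hi, hi_time,
                         precip, hum, wind):
    """Build the text lines drawn on the 128x64 OLED from fetched strings."""
    return [
        weather,
        "%s @%s / %s @%s" % (temp_lo, lo_time, temp_hi, hi_time),
        "Precip: %sin" % precip,
        "Hum: %s%%" % hum,
        "Wind: %smph" % wind,
    ]

lines = format_display_lines("Partly Cloudy", "61", "6am", "84", "3pm",
                             "0.1", "45", "8")
for line in lines:
    print(line)
```

In the main loop, each entry of the returned list would then be passed to draw.text() at its own y-offset, keeping the fetching, formatting, and drawing concerns separate.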
https://learn.tinycircuits.com/Wireling-Projects/Pi-Forecast-Display_Tutorial/
Hi all,

I'm trying to get the following test2 function to work like the Array get function; i.e. a function that can return any type that is cast to the correct type when calling the function. What do I need to change? As you can see, test1 works, but I can't get it to work as a function.

    Array ar = create(1,1)
    string s = "String"
    string r
    int i = ((addr_ s) int)
    put(ar, i, 0, 0)

    _m test2(int x, y)
    {
        int d = (int get(ar, x, y))
        return ((addr_ d) _m)
    }

    _m test1 = (_m get(ar, 0, 0))

    r = (string test1)
    print r "\n"

    r = (string test2(0,0))
    print r "\n"

    delete ar

The line calling the test2 function results in an "unassigned variable reference" error. Any help is appreciated.

Regards,
M_vdLaan
https://www.ibm.com/developerworks/community/forums/html/topic?id=f6f71009-43f5-4cf9-880b-129f9b0562bb&ps=25
Announcing TypeScript 3.7 RC

Daniel

To get started using the RC, you can use npm with the following command:

    npm install typescript@rc

You can also get editor support by

- Downloading for Visual Studio 2019/2017
- Following directions for Visual Studio Code and Sublime Text.

TypeScript 3.7 RC includes some of our most highly-requested features!

- Optional Chaining
- Nullish Coalescing
- Assertion Functions
- Better Support for never-Returning Functions
- (More) Recursive Type Aliases
- --declaration and --allowJs
- Build-Free Editing with Project References
- Uncalled Function Checks
- // @ts-nocheck in TypeScript Files
- Semicolon Formatter Option
- Breaking Changes

Optional Chaining

So what is optional chaining? Well, at its core, optional chaining lets us write code where we can immediately stop running some expressions if we run into a null or undefined. The star of the show in optional chaining is the new ?. operator for optional property accesses. When we write code like

    let x = foo?.bar.baz();

this is a way of saying that when foo is defined, foo.bar.baz() will be computed; but when foo is null or undefined, stop what we're doing and just return undefined. More plainly, that code snippet is the same as writing the following.

    let x = (foo === null || foo === undefined) ?
        undefined :
        foo.bar.baz();

Note that if bar is null or undefined, our code will still hit an error accessing baz. Likewise, if baz is null or undefined, we'll hit an error at the call site. ?. only checks for whether the value on the left of it is null or undefined, not any of the subsequent properties.

You might find yourself using ?. to replace a lot of code that performs repetitive null-and-undefined checks. There's also optional element access, which acts similarly to optional property accesses but allows us to access non-identifier properties (e.g. arbitrary strings, numbers, and symbols):

    /**
     * Get the first element of the array if we have an array.
     * Otherwise return undefined.
     */
    function tryGetFirstElement<T>(arr?: T[]) {
        return arr?.[0];
        // equivalent to
        //   return (arr === null || arr === undefined) ?
        //       undefined :
        //       arr[0];
    }

There's also optional call, which allows us to conditionally call expressions if they're not null or undefined.

    async function makeRequest(url: string, log?: (msg: string) => void) {
        log?.(`Request started at ${new Date().toISOString()}`);
        // ...
    }

The "short-circuiting" behavior of optional chains is limited to property accesses, calls, and element accesses; it doesn't expand any further out from these expressions. In other words,

    let result = foo?.bar / someComputation()

doesn't stop the division or the someComputation() call from occurring. It's equivalent to

    let temp = (foo === null || foo === undefined) ? undefined : foo.bar;
    let result = temp / someComputation();

That might result in dividing undefined, which is why in strictNullChecks, the following is an error.

    function barPercentage(foo?: { bar: number }) {
        return foo?.bar / 100;
        //     ~~~~~~~~
        // Error: Object is possibly undefined.
    }

For more details, you can read up on the proposal and view the original pull request.

Nullish Coalescing

The nullish coalescing operator is another upcoming ECMAScript feature that goes hand-in-hand with optional chaining, and which our team has been deeply involved in championing. You can think of this feature – the ?? operator – as a way to "fall back" to a default value when dealing with null or undefined. When we write code like

    let x = foo ?? bar();

this is a new way to say that the value foo will be used when it's "present"; but when it's null or undefined, calculate bar() in its place. Again, the above code is equivalent to the following.

    let x = (foo !== null && foo !== undefined) ? foo : bar();

The ?? operator can replace uses of || when trying to use a default value. For example, the following code snippet tries to fetch the volume that was last saved in localStorage (if it ever was); however, it has a bug because it uses ||.

    function initializeAudio() {
        let volume = localStorage.volume || 0.5
        // ...
    }

When localStorage.volume is set to 0, the page will set the volume to 0.5, which is unintended. The ?? operator
avoids some unintended behavior from 0, NaN and "" being treated as falsy values.

We owe a large thanks to community members Wenlu Wang and Titian Cernicova Dragomir for implementing this feature! For more details, check out their pull request and the nullish coalescing proposal repository.

Assertion Functions

There's a specific set of functions that throw an error if something unexpected happened. They're called "assertion" functions. As an example, Node.js has a dedicated function for this called assert.

    assert(someValue === 42);

In this example, if someValue isn't equal to 42, then assert will throw an AssertionError.

Assertions in JavaScript are often used to guard against improper types being passed in. For example,

    function multiply(x, y) {
        assert(typeof x === "number");
        assert(typeof y === "number");
        return x * y;
    }

Unfortunately in TypeScript these checks could never be properly encoded. For loosely-typed code this meant TypeScript was checking less, and for slightly conservative code it often forced users to use type assertions.

    function yell(str) {
        assert(typeof str === "string");
        return str.toUppercase();
        // Oops! We misspelled 'toUpperCase'.
        // Would be great if TypeScript still caught this!
    }

The alternative was to instead rewrite the code so that the language could analyze it, but this isn't convenient.

    function yell(str) {
        if (typeof str !== "string") {
            throw new TypeError("str should have been a string.")
        }
        // Error caught!
        return str.toUpperCase();
    }

TypeScript 3.7 introduces assertion signatures to model these assertion functions. The first kind uses the new asserts condition form:

    function assert(condition: any, msg?: string): asserts condition {
        if (!condition) {
            throw new AssertionError(msg)
        }
    }

asserts condition says that whatever gets passed into the condition parameter must be true if the assert returns (because otherwise it would throw an error). That means that for the rest of the scope, that condition must be truthy. As an example, using this assertion function means we do catch our original yell example.
```ts
function yell(str) {
    assert(typeof str === "string");
    return str.toUppercase();
    //         ~~~~~~~~~~~
    // error: Property 'toUppercase' does not exist on type 'string'.
    //        Did you mean 'toUpperCase'?
}

function assert(condition: any, msg?: string): asserts condition {
    if (!condition) {
        throw new AssertionError(msg)
    }
}
```

The other type of assertion signature doesn’t check for a condition, but instead tells TypeScript that a specific variable or property has a different type.

```ts
function assertIsString(val: any): asserts val is string {
    if (typeof val !== "string") {
        throw new AssertionError("Not a string!");
    }
}
```

Here asserts val is string ensures that after any call to assertIsString, any variable passed in will be known to be a string.

```ts
function yell(str: any) {
    assertIsString(str);
    // Now TypeScript knows that 'str' is a 'string'.
    return str.toUppercase();
    //         ~~~~~~~~~~~
    // error: Property 'toUppercase' does not exist on type 'string'.
    //        Did you mean 'toUpperCase'?
}
```

These assertion signatures are very similar to writing type predicate signatures:

```ts
function isString(val: any): val is string {
    return typeof val === "string";
}

function yell(str: any) {
    if (isString(str)) {
        return str.toUppercase();
    }
    throw "Oops!";
}
```

And just like type predicate signatures, these assertion signatures are incredibly expressive. We can express some fairly sophisticated ideas with these.

```ts
function assertIsDefined<T>(val: T): asserts val is NonNullable<T> {
    if (val === undefined || val === null) {
        throw new AssertionError(
            `Expected 'val' to be defined, but received ${val}`
        );
    }
}
```

To read up more about assertion signatures, check out the original pull request.

## Better Support for never-Returning Functions

As part of the work for assertion signatures, TypeScript needed to encode more about where and which functions were being called. This gave us the opportunity to expand support for another class of functions: functions that return never.
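Looping back to the optional chaining and nullish coalescing sections for a moment, the two operators combine naturally. Below is a small, runnable sketch; the AudioConfig shape and function names are invented for illustration and are not part of the announcement:

```typescript
interface AudioConfig {
  volume?: number | null;
  output?: { deviceName?: string };
}

// `??` falls back only on null/undefined, so a stored volume of 0 survives,
// unlike the buggy `||` version shown earlier in the post.
function getVolume(config: AudioConfig): number {
  return config.volume ?? 0.5;
}

// `?.` short-circuits to undefined instead of throwing when `output` is missing,
// and `??` then supplies the fallback.
function getDeviceName(config: AudioConfig): string {
  return config.output?.deviceName ?? "default";
}
```

Note how a stored volume of 0 survives the ?? fallback, which is exactly the value the || version mishandles.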
The intent of any function that returns never is that it never returns. It indicates that an exception was thrown, a halting error condition occurred, or that the program exited. For example, process.exit(...) in @types/node is specified to return never. In order to ensure that a function never potentially returned undefined or effectively returned from all code paths, TypeScript needed some syntactic signal – either a return or throw at the end of a function. So users found themselves return-ing their failure functions.

```ts
function dispatch(x: string | number): SomeType {
    if (typeof x === "string") {
        return doThingWithString(x);
    } else if (typeof x === "number") {
        return doThingWithNumber(x);
    }
    return process.exit(1);
}
```

Now when these never-returning functions are called, TypeScript recognizes that they affect the control flow graph and accounts for them.

```ts
function dispatch(x: string | number): SomeType {
    if (typeof x === "string") {
        return doThingWithString(x);
    } else if (typeof x === "number") {
        return doThingWithNumber(x);
    }
    process.exit(1);
}
```

As with assertion functions, you can read up more at the same pull request.

## (More) Recursive Type Aliases

Type aliases have always had a limitation in how they could be “recursively” referenced. The reason is that any use of a type alias needs to be able to substitute itself with whatever it aliases. In some cases, that’s not possible, so the compiler rejects certain recursive aliases like the following:

```ts
type Foo = Foo;
```

This is a reasonable restriction because any use of Foo would need to be replaced with Foo, which would need to be replaced with Foo, which would need to be replaced with Foo, which… well, hopefully you get the idea! In the end, there isn’t a type that makes sense in place of Foo. This is fairly consistent with how other languages treat type aliases, but it does give rise to some slightly surprising scenarios for how users leverage the feature.
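Returning to never-returning functions for a moment, the control-flow improvement is easy to see in a minimal sketch. The fail helper below is a hypothetical stand-in for something like process.exit; note how describe needs no trailing return after the call:

```typescript
// A never-returning helper: TypeScript 3.7+ knows that code after a
// call to this function is unreachable.
function fail(message: string): never {
  throw new Error(message);
}

function describe(x: string | number): string {
  if (typeof x === "string") {
    return `string of length ${x.length}`;
  } else if (typeof x === "number") {
    return `number ${x}`;
  }
  // No `return` keyword needed here: `fail` is declared to return `never`,
  // so this path satisfies the `string` return type.
  fail("unreachable input");
}
```

Under TypeScript 3.6 and earlier, the last line would have had to be written as `return fail(...)` for the function to type-check.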
For example, in TypeScript 3.6 and prior, the following causes an error.

```ts
type ValueOrArray<T> = T | Array<ValueOrArray<T>>;
//   ~~~~~~~~~~~~
// error: Type alias 'ValueOrArray' circularly references itself.
```

This is strange because there is technically nothing wrong with any use of this alias – users could always write what was effectively the same code by introducing an interface.

```ts
type ValueOrArray<T> = T | ArrayOfValueOrArray<T>;

interface ArrayOfValueOrArray<T> extends Array<ValueOrArray<T>> {}
```

Because interfaces (and other object types) introduce a level of indirection and their full structure doesn’t need to be eagerly built out, TypeScript has no problem working with this structure. But the workaround of introducing the interface wasn’t intuitive for users. And in principle there really wasn’t anything wrong with the original version of ValueOrArray that used Array directly. If the compiler was a little bit “lazier” and only calculated the type arguments to Array when necessary, then TypeScript could express these correctly. That’s exactly what TypeScript 3.7 introduces. At the “top level” of a type alias, TypeScript will defer resolving type arguments to permit these patterns. This means that code like the following that was trying to represent JSON…

```ts
type Json =
    | string
    | number
    | boolean
    | null
    | JsonObject
    | JsonArray;

interface JsonObject {
    [property: string]: Json;
}

interface JsonArray extends Array<Json> {}
```

can finally be rewritten without helper interfaces.

```ts
type Json =
    | string
    | number
    | boolean
    | null
    | { [property: string]: Json }
    | Json[];
```

This new relaxation also lets us recursively reference type aliases in tuples as well. The following code which used to error is now valid TypeScript code.
```ts
type VirtualNode =
    | string
    | [string, { [key: string]: any }, ...VirtualNode[]];

const myNode: VirtualNode =
    ["div", { id: "parent" },
        ["div", { id: "first-child" }, "I'm the first child"],
        ["div", { id: "second-child" }, "I'm the second child"]
    ];
```

For more information, you can read up on the original pull request.

## --declaration and --allowJs

TypeScript 3.7 allows the --declaration flag to be used together with --allowJs, so .d.ts files can be generated from JSDoc-annotated JavaScript. This means that JSDoc-annotated code like the following:

```js
/**
 * @callback Job
 * @returns {void}
 */

/** Queues work */
export class Worker {
    constructor(maxDepth = 10) {
        this.started = false;
        this.depthLimit = maxDepth;
        /**
         * NOTE: queued jobs may add more items to queue
         * @type {Job[]}
         */
        this.queue = [];
    }
    /**
     * Adds a work item to the queue
     * @param {Job} work
     */
    push(work) {
        if (this.queue.length + 1 > this.depthLimit) throw new Error("Queue full!");
        this.queue.push(work);
    }
    /**
     * Starts the queue if it has not yet started
     */
    start() {
        if (this.started) return false;
        this.started = true;
        while (this.queue.length) {
            /** @type {Job} */(this.queue.shift())();
        }
        return true;
    }
}
```

will currently be transformed into the following implementation-less .d.ts file:

```ts
/**
 * @callback Job
 * @returns {void}
 */
/** Queues work */
export class Worker {
    constructor(maxDepth?: number);
    started: boolean;
    depthLimit: number;
    /**
     * NOTE: queued jobs may add more items to queue
     * @type {Job[]}
     */
    queue: Job[];
    /**
     * Adds a work item to the queue
     * @param {Job} work
     */
    push(work: Job): void;
    /**
     * Starts the queue if it has not yet started
     */
    start(): boolean;
}
export type Job = () => void;
```

For more details, you can check out the original pull request.

## Build-Free Editing with Project References

TypeScript’s project references provide us with an easy way to break codebases up to give us faster compiles. Unfortunately, editing a project whose dependencies hadn’t been built (or whose output was out of date) meant that the editing experience wouldn’t work well. In TypeScript 3.7, when opening a project with dependencies, TypeScript will automatically use the source .ts/.tsx files instead.
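Stepping back briefly, the recursive alias relaxation from the previous section is easy to exercise directly. This sketch reuses the Json type shown above; the roundTrip helper and the sample document are mine, not from the post:

```typescript
// Legal as of TypeScript 3.7: a directly recursive type alias.
type Json =
  | string
  | number
  | boolean
  | null
  | { [property: string]: Json }
  | Json[];

// Round-trips a Json value through serialization. The recursive alias
// lets both the parameter and the return type stay precise, with no
// helper interfaces required.
function roundTrip(value: Json): Json {
  return JSON.parse(JSON.stringify(value)) as Json;
}

const doc: Json = { name: "serena", tags: ["a", "b"], depth: 2, extra: null };
```

Before 3.7, the same Json alias needed the JsonObject/JsonArray helper interfaces shown earlier in the post.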
This means projects using project references will now see an improved editing experience where semantic operations are up-to-date and “just work”. You can disable this behavior with the compiler option disableSourceOfProjectReferenceRedirect, which may be appropriate when working in very large projects where this change may impact editing performance. You can read up more about this change by reading up on its pull request.

## Uncalled Function Checks

A common and dangerous error is to forget to invoke a function, especially if the function has zero arguments or is named in a way that implies it might be a property rather than a function.

```ts
interface User {
    isAdministrator(): boolean;
    notify(): void;
    doNotDisturb?(): boolean;
}

// later...

// Broken code, do not use!
function doAdminThing(user: User) {
    // oops!
    if (user.isAdministrator) {
        sudo();
        editTheConfiguration();
    } else {
        throw new AccessDeniedError("User is not an admin");
    }
}
```

Here, we forgot to call isAdministrator, and the code incorrectly allows non-administrator users to edit the configuration! In TypeScript 3.7, this is identified as a likely error:

```ts
function doAdminThing(user: User) {
    if (user.isAdministrator) {
    //  ~~~~~~~~~~~~~~~~~~~~
    // error! This condition will always return true since the function is always defined.
    //        Did you mean to call it instead?
```

This check is a breaking change, but for that reason the checks are very conservative. This error is only issued in if conditions, and it is not issued on optional properties, if strictNullChecks is off, or if the function is later called within the body of the if:

```ts
interface User {
    isAdministrator(): boolean;
    notify(): void;
    doNotDisturb?(): boolean;
}

function issueNotification(user: User) {
    if (user.doNotDisturb) {
        // OK, property is optional
    }
    if (user.notify) {
        // OK, called the function
        user.notify();
    }
}
```

If you intended to test the function without calling it, you can correct the definition of it to include undefined/null, or use !! to write something like if (!!user.isAdministrator) to indicate that the coercion is intentional. We owe a big thanks to GitHub user @jwbay, who took the initiative to create a proof-of-concept and iterated to provide us with the current version.

## // @ts-nocheck in TypeScript Files

TypeScript 3.7 allows us to add // @ts-nocheck comments to the top of TypeScript files to disable semantic checks. Historically this comment was only respected in JavaScript source files in the presence of checkJs, but we’ve expanded support to TypeScript files to make migrations easier for all users.

## Semicolon Formatter Option

TypeScript’s built-in formatter now supports semicolon insertion and removal at locations where a trailing semicolon is optional due to JavaScript’s automatic semicolon insertion (ASI) rules. The setting is available now in Visual Studio Code Insiders, and will be available in Visual Studio 16.4 Preview 2 in the Tools Options menu.

## Function Truthy Checks

As mentioned above, TypeScript now errors when functions appear to be uncalled within if statement conditions. An error is issued when a function type is checked in if conditions unless any of the following apply:

- the checked value comes from an optional property
- strictNullChecks is disabled
- the function is later called within the body of the if

## Local and Imported Type Declarations Now Conflict

Due to a bug, the following construct was previously allowed in TypeScript:

```ts
// ./someOtherModule.ts
interface SomeType {
    y: string;
}

// ./myModule.ts
import { SomeType } from "./someOtherModule";
export interface SomeType {
    x: number;
}

function fn(arg: SomeType) {
    console.log(arg.x); // Error! 'x' doesn't exist on 'SomeType'
}
```

Here, SomeType appears to originate in both the import declaration and the local interface declaration. Perhaps surprisingly, inside the module, SomeType refers exclusively to the imported definition, and the local declaration SomeType is only usable when imported from another file.
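As a runtime illustration of the uncalled-function hazard described earlier: a method reference is always truthy, so a guard that forgets the parentheses admits everyone. The User shape mirrors the post's example; the helper names are invented, and the broken version is written with Boolean() rather than an if statement so it still compiles under 3.7's new check:

```typescript
interface User {
  name: string;
  isAdministrator(): boolean;
}

// Mistake: this tests the function object itself, which is always truthy,
// so every user "passes" the admin check.
function brokenCheck(user: User): boolean {
  return Boolean(user.isAdministrator);
}

// Fixed: actually calls the function.
function fixedCheck(user: User): boolean {
  return user.isAdministrator();
}

const guest: User = { name: "guest", isAdministrator: () => false };
```

This is precisely the class of bug the new if-condition check is designed to surface at compile time.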
This is very confusing, and our review of the very small number of cases of code like this in the wild showed that developers usually thought something different was happening. In TypeScript 3.7, this is now correctly identified as a duplicate identifier error. The correct fix depends on the original intent of the author and should be addressed on a case-by-case basis. Usually, the naming conflict is unintentional and the best fix is to rename the imported type. If the intent was to augment the imported type, a proper module augmentation should be written instead.

## API Changes

To enable the recursive type alias patterns described above, the typeArguments property has been removed from the TypeReference interface. Users should instead use the getTypeArguments function on TypeChecker instances.

## What’s Next?

The final release of TypeScript 3.7 will be released in a couple of weeks, ideally with minimal changes! We’d love for you to try this RC and give us your feedback to make sure 3.7 works great for everyone. If you have any suggestions or run into any problems, don’t hesitate to open an issue on our issue tracker!

Happy Hacking!

– Daniel Rosenwasser and the TypeScript Team

Thanks so much for the time you invested in doing this. This was a great article; thanks for doing the tie-backs to JavaScript too.

Definition according to the TypeScript official website: “TypeScript is a typed superset of JavaScript that compiles to plain JavaScript.” TypeScript starts with the same syntax as JavaScript.

TypeScript is a superset of JavaScript, providing language features that build on those that are provided by the JavaScript specification. In the sections that follow, I demonstrate the most useful TypeScript features for Angular development, many of which I have used in the examples on this website.

I was really excited reading at first, but the “I” as a prefix for interfaces was a turndown.
I looked up the community TypeScript conventions and guidelines and found this: typescript2020
Go ahead and open the project from where we left off, or download the completed project from last week here. I have also prepared an online repository here for those of you familiar with version control.

## Data System

We’re going to want to change a turn, and the way we will accomplish it is by changing the relevant value of a match. This means that before I can actually “change turns”, I’m going to need an instance of a match to exist somewhere. We can maintain a bit of organization by keeping all persistent data in a single location. In this case, that location will be a system whose job it will be to also handle saving and loading that data in the future. We don’t need to worry about all of that yet – let’s just add an instance of the match for now.

```csharp
public class DataSystem : Aspect {
    public Match match = new Match();
}

public static class DataSystemExtensions {
    public static Match GetMatch (this IContainer game) {
        var dataSystem = game.GetAspect<DataSystem> ();
        return dataSystem.match;
    }
}
```

As you can see, it is quite a simple setup. The key points are that the system inherits from “Aspect” so that it can be added to the same container as our other systems, and that it holds an instance of a match. Because the data held by the data system will need to be accessed so frequently, I also added a simple extension method to the “IContainer” so that it would feel like the container holds the match directly. It wasn’t a necessary step, but it could save a line or two of code in enough places that it felt like a nice feature. If you like the idea of the extension class we used here, you may also consider adding it in many more places.
For example, the Action System will be used frequently enough that you may want to expose the ability to perform actions and reactions as though they were also available directly from the container:

```csharp
public static class ActionSystemExtensions {
    public static void Perform (this IContainer game, GameAction action) {
        var actionSystem = game.GetAspect<ActionSystem> ();
        actionSystem.Perform (action);
    }

    public static void AddReaction (this IContainer game, GameAction action) {
        var actionSystem = game.GetAspect<ActionSystem> ();
        actionSystem.AddReaction (action);
    }
}
```

## Change Turn Action

The purpose of our game action model is to provide context to some system that will actually apply the logic to the game model(s) as necessary. In order for a system to know how to change a turn, I technically don’t need any extra information – or at least I wouldn’t if it were safe to assume that control will ALWAYS pass from one player to the next. In truth, I think that is probably the way it actually is in Hearthstone. Regardless, I decided to demonstrate a slightly more flexible action. We will specify the next player’s index in the action itself. This gives notification observers a chance to modify whose turn will actually come next. If we were playing something like Uno, then playing a “skip” or “reverse” card could change the normal flow of player turns. In the same way, I can imagine special cards in our CCG that might also allow a player to take another turn – though they would need to be used sparingly because I would imagine it to be very powerful. If you’re curious, the unit testing code includes a test to override the default value and allow a player an extra turn.

```csharp
public class ChangeTurnAction : GameAction {
    public int targetPlayerIndex;

    public ChangeTurnAction (int targetPlayerIndex) {
        this.targetPlayerIndex = targetPlayerIndex;
    }
}
```

Thanks to everything we packed into the base class, there is almost nothing here. Technically I didn’t even have to include the custom constructor.
We really just need a public field to hold the target player index. We assign it the default value for the next player, but any other system may change the value via a card’s special ability, etc. before the action actually gets performed.

## Observers

When running our new game action through the action system, notifications for its preparation and performance phases will be posted. It will be the job of another system to observe each notification and implement any necessary logic. While the notification code itself is super simple to use, it also requires a degree of discipline. Adding a notification observer is easy and can be done just about anywhere, including in a constructor. The removal of the notification observer is not as easy. You can’t, for example, rely on a destructor for this purpose. This is because the notification center maintains strong pointers, which will prevent the object from being destroyed.

If we were working with MonoBehaviour, there would be obvious locations to both add and remove observers, such as Awake / Destroy, or OnEnable / OnDisable. Although I don’t want to use a MonoBehaviour, I don’t have anything against copying some of the patterns I actually liked. In my opinion we can even improve on the idea. The MonoBehaviour uses reflection to figure out what methods are actually implemented, which means that you don’t get auto-complete in your code editor when typing out your method names. You may forget if it was supposed to be “OnEnable” vs “Enable” or “Start” vs “OnStart”. Instead of reflection, we can have our systems implement an interface for whichever features we want to support.
Take a look at a sample which gives us “Awake” functionality:

```csharp
public interface IAwake {
    void Awake();
}

public static class AwakeExtensions {
    public static void Awake (this IContainer container) {
        foreach (IAspect aspect in container.Aspects()) {
            var item = aspect as IAwake;
            if (item != null)
                item.Awake ();
        }
    }
}
```

So now, any system that implements “IAwake” can have an “Awake” method called. I created an extension method of a container to loop through all of its aspects, checking for the implementation of this interface and invoking it if found. In practice, after building the game container through some sort of factory method, I would next use this extension. I would know that all of the systems already existed before running “Awake” in case one of the systems wanted to cache a reference to another, or connect notifications, etc. Since I mentioned wanting a way to also remove notifications, let’s mimic the “Destroy” functionality of a MonoBehaviour too:

```csharp
public interface IDestroy {
    void Destroy();
}

public static class DestroyExtensions {
    public static void Destroy (this IContainer container) {
        foreach (IAspect aspect in container.Aspects()) {
            var item = aspect as IDestroy;
            if (item != null)
                item.Destroy ();
        }
    }
}
```

As you might imagine, if you continued creating a new interface for every single method, then your class definitions would start to get a bit unwieldy. We can help by making use of interface inheritance. If I know that I want both an “Awake” method for adding notifications and a “Destroy” method for removing notifications, then I could define a new interface that requires both like so:

```csharp
public interface IObserve : IAwake, IDestroy {

}
```

And now any class which implements the single “IObserve” interface will have both an “Awake” and “Destroy” method.

## Match System

Let’s do a quick recap. We have a system that holds our game models (at the moment this is just a single match object). We have created a game action with a context for modifying a game model.
The game action will be passed through the action system, which will also cause notifications to be posted. A system is supposed to “observe” these notifications and actually apply the logic. This last step is what we are creating now. Since the action is modifying a match object, I decided to create a match system to handle the process.

```csharp
public class MatchSystem : Aspect, IObserve {

    public void ChangeTurn () {
        var match = container.GetMatch ();
        var nextIndex = (1 - match.currentPlayerIndex);
        ChangeTurn (nextIndex);
    }

    public void ChangeTurn (int index) {
        var action = new ChangeTurnAction (index);
        container.Perform (action);
    }

    public void Awake () {
        this.AddObserver (OnPerformChangeTurn, Global.PerformNotification<ChangeTurnAction> (), container);
    }

    public void Destroy () {
        this.RemoveObserver (OnPerformChangeTurn, Global.PerformNotification<ChangeTurnAction> (), container);
    }

    void OnPerformChangeTurn (object sender, object args) {
        var action = args as ChangeTurnAction;
        var match = container.GetMatch ();
        match.currentPlayerIndex = action.targetPlayerIndex;
    }
}
```

Note that the class inherits from “Aspect” – this is what allows it to be connected to the same container as the “DataSystem” and “ActionSystem” so that all three systems can work together. It also implements the “IObserve” interface – this is what lets us know to invoke the “Awake” and “Destroy” methods to give the class a chance to add and remove its notification observers. I provided a public “ChangeTurn” method that initiates the process of changing a turn (but note that the turn won’t actually change until the action system goes through its phases). You might call this method after a user presses a button to end his turn. I also provided an overloaded version which accepts an index. This may not be necessary in a pure Hearthstone clone, but could be useful in the Uno example from before.
Once the perform phase runs, and the notification is observed, we will use the “OnPerformChangeTurn” handler to actually apply the logic of changing the turn based on whatever is now available in the action’s context.

## Demo

There isn’t a “real” demo for this lesson outside of being able to look at the various unit tests that I have included in the project for you. You can always run them and see that they pass, but admittedly it’s not terribly exciting to see a green checkmark appear. This is largely due to the fact that, in a way, we have really only implemented “half” of the action – the data and systems are working. The next half to implement is some sort of “viewer” to display to the user what is happening behind the scenes. We’ll add support for this next, so stay tuned!

## Summary

In this lesson we tied together several systems to demonstrate a simple task – the ability to change turns. We created a placeholder for the DataSystem which holds the match, we created our first GameAction subclass, and we also created a Match System to handle the new action. We also added some more interfaces related to the observation of notifications so that it would be easy to both add and remove our notification handlers at appropriate times.

4 thoughts on “Make a CCG – Changing Turns”

Really cool tutorials. Btw, I don’t know if you are aware, but your design pattern looks a lot like ECS. You should take a look at Entitas for Unity; I think you will like it. Cheers

Thanks, I’m glad you like them. I am familiar with ECS – I used a variant by Adam Martin in my previous project – the unofficial Pokemon board game. After being influenced by his rather strong opinions on the topic, some of the design decisions I saw in Entitas didn’t seem to match well, so I may not have given it a fair chance. Perhaps I’ll take another look later.
Thank you for posting such quality work. I have a question about your Tactics project – is it appropriate to ask here, or is there a better place for such questions?

Glad you liked it. Feel free to ask questions here or on my forum. I personally think it’s a little easier to understand detailed Q&A on the forum, but simple questions are fine here.
Most modern languages have a concept of packages, wherein related classes are stored together. PHP sadly doesn’t have a similar concept. For example, in Java we can use the following line to import all classes from the java.awt.event package.

```java
import java.awt.event.*;
```

But in the spirit of ‘programming into the language’, what we can do is try to simulate a ‘package-like’ concept in PHP. Take the following directory structure.

```
com/
  codediesel/
    Security/
      login.php
```

So now, if we want to import the ‘login.php’ file in our code, we must be able to do the following with a newly created ‘import’ function:

```php
import('com.codediesel.Security.login');
```

This will include the ‘login.php’ file from the ‘com/codediesel/Security’ directory. To include all the files from the ‘Security’ directory we can do the following:

```php
import('com.codediesel.Security.*');
```

As you can see, this is actually quite different from what the actual package concept implies, which is to import related classes. Anyway, taking it as a conceptual idea only, the code for the import function is shown below. Please note that this is just an idea; detailed error handling is not included, to keep the code conceptually simple to understand. It would be fun if readers could bounce around some thoughts.

## Now the Problems

The first main problem is that this is not exactly what the package concept implies. What we are essentially doing is dressing up the ‘include’ concept in new clothes. The second is that for multiple file inclusion, as shown above, we cannot predict in which sequence the files will be included, which can raise some dependency issues. Including a large number of files can incur a small performance hit. This can be better handled if developed as a PECL extension.

This problem was solved some time ago in PHP, and in a manner that can speed up the use of a potential hierarchy of classes, rather than slow it down by including files that are not being used.
It’s AutoLoading, as demonstrated with such tools as Zend_Loader, part of Zend Framework (though it can be used separately to the rest of it).

I’m aware of the Autoload functionality in PHP5; what I wanted to do was to somehow replicate the syntax offered by other languages.

Sameer, I think the question Topbit raises (though it was not stated this plainly) is why this is necessary or even desirable, given that we already have Autoload. I’m not saying it is or isn’t; I’d just like to see if there’s an answer. I’m curious, but I need to understand the utility of your concept.

To put it plainly, I just needed to bring the Java package concept as-is to PHP; it doesn’t have to have any immediate utility in the daily practices of a programmer. Autoload is a nice thing, albeit somewhat retrofitted into PHP. Anyway, it’s just an idea; if it doesn’t work, then so be it.

Doctrine 2.0 has a class loader for the PHP 5.3 namespace separator “\”. It is also compatible with Zend/PEAR naming conventions. Look at the file Doctrine\Common\ClassLoader.php. I also think you can now implement the package structure with namespaces (Doctrine\Entity\Manager), though it was already present as a convention for class names (Zend_Auth_Adapter, for instance).

You may also want to look into PHAR packages, which have been around for a while as a PECL extension, but the functionality is baked into PHP 5.3 by default now. Granted, the concept is more or less jars for PHP, but you can use them to bundle similar classes together and load them in one fell swoop if you’re clever about it. Also, it might be worth looking into SPL autoloaders rather than the vanilla __autoload magic if you’re not familiar with them. You could write specific functions that load classes from well-known file system hierarchies, thus eliminating the need for an “import” kind of thing, but still being able to maintain the java-like package structure you enjoy.
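As a side note, the dotted-package-to-path translation at the heart of the proposed import() function is a small, testable piece of logic on its own. Here is a sketch of that mapping in TypeScript (the .php suffix convention follows the article; the function itself is illustrative):

```typescript
// Maps "com.codediesel.Security.login" -> "com/codediesel/Security/login.php",
// and "com.codediesel.Security.*" -> a directory path plus a wildcard flag,
// mirroring the article's two import() forms.
function packageToPath(spec: string): { path: string; wildcard: boolean } {
  const wildcard = spec.endsWith(".*");
  const parts = (wildcard ? spec.slice(0, -2) : spec).split(".");
  const base = parts.join("/");
  return { path: wildcard ? base : `${base}.php`, wildcard };
}
```

With the wildcard form, a loader would then include every .php file under the returned directory; with the single-class form it includes just the one file.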
The nice thing about SPL autoloaders is that you can define as many as you want, and PHP will execute all the autoloaders you’ve defined and registered before throwing up a “class doesn’t exist” error (to put it crudely). They’re also faster than regular ol’ __autoload, so there’s no huge penalty (outside filesystem access) for using them. That’s my 2 cents.

Hi, I had my own library that has lots of features from Java, and “import” is one of its primary features. In real-life use, I’ve split it into different methods as follows:

```php
class Tine {
    static function importClass( $class ) {
        $class_path = trim( strtr( $class, '.*', '/ ' ) );
        $class_name = strtolower( basename( $class_path ) );
        $class_path = TINE_CLASS_PATH . '/' . $class_path . '.php';
        if ( file_exists( $class_path ) && is_file( $class_path ) && !class_exists( $class_name ) ) {
            require( $class_path );
        }
    }

    static function importPackage( $package ) {
        $class_path = TINE_CLASS_PATH . '/' . trim( strtr( $package, '.*', '/ ' ) );
        self::importFiles( $class_path );
    }

    static function importFiles( $folder, $create_instance = false ) {
        if ( file_exists( $folder ) && is_dir( $folder ) ) {
            $append = substr( $folder, -1, 1 ) == '/' ? '' : '/';
            $handle = opendir( $folder );
            if ( is_resource( $handle ) ) {
                while ( true == ( $file = readdir( $handle ) ) ) {
                    $full_file = $folder . $append . $file;
                    if ( is_file( $full_file ) ) {
                        $res = null;
                        $ext = substr( $file, -4, 4 );
                        $class_name = substr( $file, 0, -4 );
                        if ( 0 == strcasecmp( $ext, '.php' ) ) {
                            $class_exists = class_exists( $class_name );
                            if ( !$class_exists ) {
                                require( $folder . $append . $file );
                            }
                            if ( $create_instance && $class_exists ) {
                                new $res[1];
                            }
                        }
                    }
                }
                closedir( $handle );
            }
            return;
        }
    }

    static function importFolder( $folder, $create_instance = false ) {
        self::importFiles( $folder, $create_instance );
    }

    static function importFile( $file, $once = true ) {
        if ( $once )
            require_once( $file );
        else
            require( $file );
    }

    static function import( $class ) {
        $class = str_replace( ' ', '', $class );
        $pos = strpos( $class, '/' );
        if ( $pos === false ) {
            if ( $class{ strlen( $class ) - 1 } == '*' )
                self::importPackage( $class );
            else
                self::importClass( $class );
        } else {
            if ( is_dir( $class ) )
                self::importFiles( $class );
            else
                self::importFile( $class );
        }
    }
}
```

Most of the applications developed for my clients use more than 100 classes, so to gain performance I’ve found a little way to load faster. As the main goal is to load fewer files, I use an import tracker to generate a file containing the most imported files, something like:

```php
static function trackFile( $path ) {
    if ( defined('TINE_CLASS_TRACKER') && file_exists( $path ) ) {
        global $trackers;
        $trackers[ $path ] = md5_file( $path );
    }
}
```

I have a similar function, and I am happy that I am not the only weirdo that uses such functionality. I hate putting the namespace in class names, so it is OK to have an import() instrument. I also have a core folder where I put basic classes which are loaded by default by autoload.
In today's Programming Praxis exercise, our task is to define functions to map, filter, fold and foreach over records in text file databases, for which we wrote parsers in the previous exercise. However, due to the way we wrote the functions last time, there really isn't much point in doing so. Since the parsers already return a list of records (albeit wrapped in an Either and an IO), you can simply use the map, filter, foldl and mapM_ functions from the Prelude to process them. I suppose that in the Scheme solution it makes a little more sense, since there the parsers only return one record at a time, but even then I'd personally just write a function that returns all the records in a file and then process them like any other list, since it saves you from having to duplicate a lot of existing functions. Additionally, it makes function composition much easier, as the database-specific functions cannot be composed. Of the four functions mentioned, the only one that warrants a function in Haskell is foreach (or in Haskell terminology, mapM_), since it requires doing something with the potential parse error:

dbMapM_ :: Monad m => (a -> m b) -> Either l [a] -> m ()
dbMapM_ = either (const $ return ()) . mapM_

The other three can just be fmapped over the result of readDB. I won't bore you with the implementations for map, filter and foldl, since they would be largely identical to the ones found in the Prelude.

main :: IO ()
main = do
    db <- readDB (fixedLength [5,3,4]) "db_fl.txt"
    print $ map head <$> db
    print $ foldl (const . succ) 0 <$> db
    print $ filter (odd . length) <$> db
    dbMapM_ print db

Tags: bonsai, code, Haskell, kata, praxis, programming

October 22, 2010 at 8:09 pm |
Laziness helps you. In Haskell, when you return a list of records from the port, you get the list one record at a time. If I were to write the equivalent function in Scheme, I would have to store the entire list — that would be inconvenient, and for a very large file, might not even be possible.
October 22, 2010 at 10:24 pm | Granted, but even in C# or Python you could make a generator function that lazily returns records. I would imagine you can do the same thing in Scheme. Just because a language isn’t lazy by default doesn’t mean you can’t do lazy evaluation. Besides, by that metric your current map and filter implementations are already inconvenient, since mapping the identity function or filtering with an always true predicate gives you the entire list of records just the same.
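The generator approach mentioned in that reply is easy to sketch in Python; the record layout here (fixed-width fields of 5, 3 and 4 characters, mirroring the fixedLength [5,3,4] parser above) is an illustrative assumption, not the praxis code:

```python
def records(path, widths=(5, 3, 4)):
    """Lazily yield fixed-width records from a text file database.

    Only one line is held in memory at a time, so arbitrarily large
    files can be mapped/filtered/folded with ordinary built-ins.
    """
    with open(path) as handle:
        for line in handle:
            fields, pos = [], 0
            for width in widths:
                fields.append(line[pos:pos + width].strip())
                pos += width
            yield fields

# Callers compose lazily with generator expressions, e.g.:
# short = (r for r in records("db_fl.txt") if len(r[0]) < 5)
```

Nothing is read until the generator is consumed, which gives the non-lazy language the same one-record-at-a-time property the comment thread is discussing.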
https://bonsaicode.wordpress.com/2010/10/22/programming-praxis-text-file-databases-part-2/
Windows Application Project. Add a new Form to the project and name it InformationForm. For more information, see How to: Add Windows Forms to a Project. From the Toolbox, drag a TableLayoutPanel onto the form. Use the smart tag that appears as an arrow next to the control to add a third row to the table, and use the mouse to resize the rows so that all three are equal. Add a Label to each table cell in the first column, and a TextBox to each cell in the second column. The Label controls should be named, from top to bottom, firstNameLabel, lastNameLabel, and emailLabel; the TextBox controls should be named firstNameText, lastNameText, and emailText. When you are finished, add a Button control. Name it okButton, and change the Text property to OK. Add a new Class file named UserInformation to the project. Add the public qualifier to the class definition to make this class visible outside of its namespace. Go back to the code for InformationForm, and add a UserInformation property. Handle the Validated event on each of the TextBox controls so that you can update the corresponding property on UserInformation whenever one of the values changes. Add handlers for the Validated event to each of the TextBox controls on your form, so that the new value of these controls is assigned to UserInformation whenever the user changes them. Select Form1 in Visual Studio. Add a Button to the form, and change its Name property to showFormButton. Double-click the button to add an event handler that calls the dialog box and displays the
https://msdn.microsoft.com/en-us/library/cakx2hdw(v=vs.85).aspx
#include "feature/dircommon/voting_schedule.h"
#include "core/or/or.h"
#include "app/config/config.h"
#include "feature/nodelist/networkstatus.h"
#include "feature/nodelist/networkstatus_st.h"

Go to the source code of this file. This file contains functions from the directory authority subsystem that are related to voting specifically but used by many parts of Tor. The full feature is built as part of the dirauth module. Definition in file voting_schedule.c. Frees a voting_schedule_t. This should be used instead of the generic tor_free. Definition at line 134 of file voting_schedule.c. Return the start of the next interval of size interval (in seconds) after now, plus offset. Midnight always starts a fresh interval, and if the last interval of a day would be truncated to less than half its size, it is rolled into the previous interval. Definition at line 30 of file voting_schedule.c. References tor_gmtime_r(), and tor_timegm(). Set voting_schedule to hold the timing for the next vote we should be doing. All types of Tor do this because the HS subsystem needs the timing as well to function properly. Definition at line 182 of file voting_schedule.c.
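The interval arithmetic documented for voting_schedule_get_start_of_next_interval can be sketched in Python. This is an illustrative reading of the documented behaviour (midnight starts a fresh interval; a final interval shorter than half the nominal size merges into the previous one), not a transcription of the C source:

```python
DAY = 86400  # seconds

def start_of_next_interval(now, interval, offset=0):
    """Start of the next `interval`-second slot after `now` (epoch seconds)."""
    midnight = now - (now % DAY)          # UTC midnight of `now`'s day
    next_midnight = midnight + DAY
    elapsed = now - midnight
    # next interval boundary strictly after `now`
    nxt = midnight + (elapsed // interval + 1) * interval
    # A final slot truncated to less than half its size rolls into the
    # previous one, so the next boundary becomes midnight itself.
    if nxt > next_midnight - interval // 2:
        nxt = next_midnight
    return nxt + offset
```

With hourly intervals the boundaries fall on the hour; with an interval that does not divide the day evenly, the short slot before midnight disappears into its predecessor, exactly as the documentation describes.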
https://people.torproject.org/~nickm/tor-auto/doxygen/voting__schedule_8c.html
Python Programming, news on the Voidspace Python Projects and all things techie.

Fun with Unicode, Latin-1 and a C1 Control Code

Unicode is a rabbit-warren of complexity; almost fractal in nature, the more you learn about it the more complexity you discover. Anyway, all that aside you can have great fun (i.e. pain) with fairly basic situations even if you are trying to do the right thing. This particular problem was encountered by Stephan Mitt, one of my colleagues at Comsulting. I helped him find the solution, and with a bit of digging (and some help from #python-dev) worked out why it was happening. We receive data from customers as CSV files that need importing into a web application. The CSV files are received in latin-1 encoding and we decode and then iterate over them to process a line at a time. Unfortunately the data from the customers included some \x85 characters, which were breaking the CSV parsing. One of the problems with the latin-1 encoding is that it uses all 256 bytes, so it is never possible to detect badly encoded data. Arbitrary binary data will always successfully decode:

>>> data = ''.join(chr(x) for x in range(256))
>>> data.decode('latin-1')
u'\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f...'

If you iterate over a standard file object in Python 2 (i.e. one that reads data as bytestrings) then you iterate over it a line at a time. This splits lines on carriage returns (\x0D) and line feeds (\x0A). If you're on Windows then the sequence \x0D\x0A (CRLF) signifies a new line. If you're trying to do-the-right-thing, and decode your data to Unicode before treating it as text, then you might use code a bit like the following to read it:

import codecs

handle = codecs.open(filename, 'r', encoding='latin-1')
for line in handle:
    ...

This was the cause of our problem. When decoding using latin-1 \x85 is transcoded to u'\x85', which Unicode treats as a line break.
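The same behaviour is easy to reproduce on today's Python; here is a Python 3 sketch (where all strings are Unicode, so the pitfall is the default):

```python
# '\x85' decoded from latin-1 becomes U+0085 (NEL), which the Unicode
# line-breaking rules treat as a line boundary.
text = b"foo\x85bar".decode("latin-1")

print(text.splitlines())   # NEL splits the string into two lines
print(text.split("\n"))    # an explicit '\n' split leaves it whole
```

splitlines() honours the full set of Unicode line boundaries, while split("\n") only ever splits on the character you name, which is why the post below recommends doing the split manually.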
So if your source data has \x85 embedded in it, and you are splitting on lines, where the lines break will be different depending on if you are using byte-strings or Unicode strings:

>>> d = 'foo\x85bar'
>>> d.split()
['foo\x85bar']
>>> u = d.decode('latin-1')
>>> u
u'foo\x85bar'
>>> u.split()
[u'foo', u'bar']

This could still be a pitfall in Python 3, where all strings are Unicode, particularly if you are porting an application from Python 2 to Python 3. Suddenly your data will behave differently when you treat it as Unicode. The answer is to do the split manually, specifying which character to use as a line break. The problem isn't restricted to \x85. The Unicode spec on newlines shows us why. \x85 is referred to by the acronym NEL, which is a C1 Control Code:

NEL Next Line Equivalent to CR+LF. Used to mark end-of-line on some IBM mainframes.

In fact NEL belongs to a general class of characters known as Paragraph Separators (Category B). This category includes the characters \x1C, \x1D, \x1E, \x0D, \x0A and \x85. Splitting on lines will split on any of these characters, which may not be what you expect. It certainly wasn't what we expected. For us the solution was simple; we just strip out any occurrence of \x85 in the binary data before decoding.

Note
Marius Gedminas suggests that the data is probably encoded as Windows 1252 rather than Latin-1. He is probably right. There are some interesting notes on Unicode line breaks in this Python bug report: What is an ASCII linebreak?.

Posted by Fuzzyman on 2010-01-07 12:42:27 | Categories: Python, Work, Hacking Tags: Unicode, latin-1, encoding

Current State of Unladen Swallow (Towards a Faster Python)

I'm helping to organise the Python Language Summit that precedes the PyCon Conference this year.
One of the first topics we'll be discussing is "Python 3 adoption and tools and the Python language moratorium" which will include representatives of the major Python implementations telling us the current state of their implementation and their plans or progress for supporting Python 3. I was fortunate today to exchange emails with Collin Winter, one of the core developers of Unladen Swallow on this topic. He gave me a sneak preview of what he will have to say at the summit. By way of introduction, Unladen Swallow is a project sponsored by Google to speedup Python. In particular it uses the LLVM (Low Level Virtual Machine) to provide a JIT (Just in Time compiler). Google use Python a great deal and so have a direct commercial interest in making it faster. Shortly after the first release Unladen Swallow was put to work serving YouTube traffic. The Unladen Swallow team see the project as a branch of CPython (the standard and reference implementation of the Python programming language) and their goal is to merge their changes back into Python once it is complete. Here is what Collin had to say: Brief summary: Unladen Swallow is currently focused on the process of merging into CPython's 3.x line (PEPs, patches, etc). We've been focused on pushing all our necessary changes into upstream LLVM in time for LLVM's 2.7 release, so that CPython 3.x can be based on that release. We've met our goals of maintaining pure-Python and C extension compatibility while speeding up CPython, and have created a platform for future development that we believe can continue to yield increasing performance for years to come. As for py3k support, Unladen Swallow is currently based on CPython 2.6.1. We will seek to merge our changes into Python's 3.x line exclusively (without backporting to 2.x), updating our patches as necessary to correct for the 2.x->3.y skew. Once merger is completed, Unladen Swallow will cease to exist as an independent project. (Emphasis added by me.) 
I also asked Collin about whether the merge back into Python would be difficult, in particular because LLVM hasn't been well tested on Windows in the past and LLVM is also written in C++ whereas the rest of CPython is all C. This potentially has ABI compatibility issues: Unladen currently builds and passes its tests on Windows (or did the last time I checked), so I don't think that will be an issue. The addition of C++ to the codebase will be more contentious, but the C++ stuff is restricted to the JIT-facing internals; the rest of the implementation is still straight C. We hope that will mitigate any concerns. "Impact on CPython Development" is a fairly lengthy section of the merger PEP I'm writing. So, it looks like Unladen Swallow already offers a speedup that the team are happy with. I know there have been some interesting improvements recently, like this one that inlines certain binary operations. Additionally the merge with CPython is going to happen 'soon' (i.e. a PEP is being written) and it is likely to happen on the Python 3 branch. Perhaps that will give the Python community an incentive to make the jump... NOTE: Jesse Noller also has a blog entry on this topic: Unladen Swallow: Python 3's Best Feature.

Posted by Fuzzyman on 2010-01-07 01:35:39 | Categories: Python, Projects Tags: Unladen Swallow, performance, Py3k

New Year's Python Meme

This is the blog entry I had nearly finished when I started messing around with Mock on Saturday. Started by Tarek Ziade, five short questions on you and Python in 2009 (or in this case me and Python)...

What's the coolest Python application, framework or library you have discovered in 2009?

One of the biggest things that happened in my life in 2009 was leaving Resolver Systems to become a freelance developer working for a German firm Comsulting.de.
I'm still working with IronPython, but now developing web applications with Django on the server and using IronPython in Silverlight on the client. It's great fun and although I'd used both Silverlight and Django a bit previously I'm now working with them full time.

What new programming technique did you learn in 2009?

At Resolver Systems we all took responsibility for architectural decisions. It was a great team to be part of and a great way to work. In my current project I've largely been responsible for building the application myself, although my colleague who is an excellent designer has recently been able to join me in the coding. That means I've made the architectural decisions for the application. This has stretched me and structuring large applications is something I want to explore more.

What's the name of the open source project you contributed the most in 2009? What did you do?

Well, in 2009 I became a Python core-developer and the maintainer of unittest. The work I've done on unittest is definitely the most valuable contribution to open-source in 2009. That reminds me, there are a bunch of open tickets with my name on that I really ought to be looking at instead of doing this... Other than that I worked on a bunch of little projects of my own. This is probably the project I'm most proud of: a Python interpreter and tutorial that runs in the browser with Silverlight. As Silverlight comes from Microsoft, and last I heard was installed on around 30% of the world's browsers, Try Python isn't a runaway success but I think it is very cool. Moonlight is now out, so in theory the site could work on Linux machines - but there is an issue with the version of IronPython I use. Hopefully I'll get around to updating the site soon and will also add an IronPython tutorial to the Python tutorial. A Python configuration file reader and writer that is easy to use but with about a gazillion extra features not found in ConfigParser.
This is the most widely used code I've ever released, but I don't use it much myself these days. Thankfully it takes little maintaining, however I have done a bunch of work on version 4.7 which is just waiting for me to pull my finger out and release. A simple mocking library for testing Python code, that makes a great companion to unittest. In my day job I'm now focusing on integration testing and not doing much unit testing, so I don't use Mock as much as I used to and it hasn't got the attention it deserves. It was nice to finally add support for magic methods so that you can mock numeric types, containers and so on. I also wrote a lot of articles on IronPython and supporting example code to go with them. What was the Python blog or website you read the most in 2009? Like many of the other folk who answered these questions, I tend to keep in touch with Python news through Planet Python (in fact this year I sort of became responsible for some of the administration of the Planet when I joined the Python webmaster team). I enjoy a lot of the bloggers on the Planet and find it invaluable for keeping up to date with the Python world. There really are a lot of great bloggers contributing to the Planet so I'm only going to call out one: Jacob Kaplan-Moss. A great blogger, both fun and on the ball technically. Of course like the rest of us he needs to pull his finger out and blog more often. In fact in 2009 I went old school and (re)discovered the joys of IRC. I'm often on #python-dev and various other Python related channels. Twitter is also still growing and I've had a lot of fun and learned a lot from the many tech folk I follow there. What are the three top things you want to learn in 2010? I'd like to learn more about web programming, in particular I want to get deeper into Django (and perhaps Pinax) and properly learn Javascript. 
There are lots of programming languages I'd like to learn (C so I can contribute to CPython and just because it is everywhere, Haskell so I can get functional enlightenment, maybe F# so I can achieve the same thing but in a language that might actually be useful, Erlang because all the cool kids are doing it and it seems to have the most practical approach to concurrency of the 'modern' languages, Lisp to see what all the fuss is about and probably a load more languages). In reality I'll only learn a programming language that I actually need to use, so I think Javascript is the programming language I'm most likely to have the opportunity to really dive into. Although I've tinkered with Javascript (who hasn't) I haven't fully appreciated what it means to idiomatically program with a prototype based language so it is definitely of value. There are a huge number of libraries and frameworks I'd love to learn, including Twisted, multiprocessing and other web frameworks. I'd also like to do mobile application development either for the iPhone or Android. That would give me a reason to use another language, but I have to say that Objective-C is more appealing than Java. I doubt I'll find time to do any of this in 'hobby-time', so hopefully they'll come up in a work context.

Posted by Fuzzyman on 2010-01-04 14:31:39 | Categories: Python, Fun, Projects Tags: meme, Happy New Year

Python Surprises

In the last few days I've run into several things I didn't know about Python. Not necessarily bad or wrong, just new to me.

>>> object.__new__(int)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: object.__new__(int) is not safe, use int.__new__()

The same happens for pretty much all the built-in types. I don't think you can achieve this effect from pure-Python code, which is why it is impossible (I think) to write a real singleton in pure-Python.
From any singleton instance you can always do this:

object.__new__(type(the_singleton))

Anyway, next surprise:

>>> class Meta(type):
...     __slots__ = ['foo']
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Error when calling the metaclass bases
    nonempty __slots__ not supported for subtype of 'type'

This was annoying at the time, but caused me to find a better way to achieve what I wanted anyway. These first two show that despite the 'grand-merger' of Python 2.2 you can't treat the built-in types exactly as if they were user-defined classes. The next one I actually ran into a while back:

>>> @EventHandler[HtmlEventArgs]
  File "<stdin>", line 1
    @EventHandler[HtmlEventArgs]
                 ^
SyntaxError: invalid syntax

This one is annoying. In IronPython EventHandler[HtmlEventArgs] would return a typed event handler for wrapping a function with. Decorator syntax would be very convenient but the only valid syntax is a name followed by optional parentheses and arguments - not any arbitrary expression. The relevant part of the grammar is:

decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE

This grammar not only prevents indexing but means you can't (for example) define lambda decorators. All it would take is a grammar change and these could work, no actual code would need to be written in support. The reason that Guido didn't allow it is that he didn't want people writing code like:

@(F((foo + bar / 3 )) / [x**2 for x in frobulator])
def function():
    ...

Guido did agree that the rules could be relaxed (here is the python-ideas thread where it was discussed), but then the language moratorium came into effect. The final surprise was that default object equality comparison is implemented inside the Python runtime instead of there being a default implementation in object. In fact object() instances don't even have the equality / inequality methods (__eq__ / __ne__).
>>> object().__eq__(object())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'object' object has no attribute '__eq__'

However, if you look up __eq__ on the type, as you might if you were trying to delegate up to the default implementation that doesn't exist, then something weird happens:

>>> object.__eq__(object(), object())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: expected 1 arguments, got 2
>>> object.__eq__
<method-wrapper '__eq__' of type object at 0x141fc0>

When you look up __eq__ on object (the type rather than an instance) then you get the __eq__ method of its metaclass (type) bound to object which is an instance of type. As this is a bound method it only takes one argument and calling it with two arguments causes a TypeError. In fact there is nothing special about __eq__ here, I just didn't realise that member resolution on types would check the metaclass after checking the base classes:

>>> class Meta(type):
...     X = 3
...
>>> class Something(object):
...     __metaclass__ = Meta
...
>>> Something.X  # from the metaclass
3
>>> Something.X = 4  # set on the type
>>> Meta.X
3
>>> class SomethingElse(Something): pass
...
>>> SomethingElse.X  # fetched from base class not the metaclass
4

Posted by Fuzzyman on 2010-01-04 12:16:12 | Categories: Hacking, Python Tags: decorators, grammar, singletons

Mocking Magic Methods and Preserving Function Signatures Whilst Mocking

So, I'm most of the way through one blog entry, my tax return is due, I have a PyCon talk to write and I have a release of ConfigObj just waiting for me to finish updating the docs. Naturally then I should mess around implementing new features for Mock. These particular features were inspired by an email from Mock user Juho Vepsalainen who had a particular problem with Mock. In case you aren't familiar with it, Mock is a simple mocking library for unit testing.
Mock makes creating mock objects, and patching out implementations with mocks at runtime, trivially easy. I've spent a chunk of time today implementing a module that extends Mock to add new features. Eventually they will become part of Mock itself, but that would require a new release and tedious things like writing documentation.

Note
I've already improved the code in extendmock and merged it into the main mock module. No need for a special MagicMock class any more. You can use mock.py from subversion or wait for the release of version 0.7.

To implement a lot of functionality (mocking any class and recording how they are used), mocks are instances of the Mock class. This can be a problem for code that uses introspection to determine if something is a function or not, or introspects the function signature. If you mock a function or method it will be replaced with a callable object with the signature (*args, **kwargs). This also means that code which is called incorrectly won't raise an error; you will only catch this in your tests if you specifically check how the object is called (which you usually will because that's the point of mocking it out - but still). A solution to all these problems is the mocksignature function. This takes a function (or method) and a mock object. It creates a wrapper function with the same signature as the function you pass in. When called this wrapper function calls the mock, so instead of directly patching a mock to replace a function or method you use the function returned by mocksignature. Code that introspects the function you are patching out will still work.
Here's an example:

from mock import Mock, patch
from extendmock import mocksignature
from some_module import some_function

mock = Mock()
mock_function = mocksignature(some_function, mock)

@patch('some_module.some_function', mock_function)
def test():
    from some_module import some_function
    some_function('foo', 'bar', 'baz')

test()
mock.assert_called_with('foo', 'bar', 'baz')

To make it more convenient to use I will build support for mocksignature into the patch decorator. You can also use mocksignature on instance methods:

from mock import Mock
from extendmock import mocksignature

class Something(object):
    def method(self, a, b):
        pass

s = Something()
mock = Mock()
mock_method = mocksignature(s.method, mock)
s.method = mock_method
s.method(3, 4)
mock.assert_called_with(3, 4)

A limitation of mocksignature is that all arguments are passed to the underlying mock by position. If there are default values they will be explicitly passed in. Keyword arguments are only collected if the function uses **kwargs. See the tests for more details. The important fact is that the function signature is unchanged:

import inspect
from extendmock import mocksignature
from mock import Mock

def f(a, b, c='foo', **kwargs):
    pass

mock = Mock()
new_function = mocksignature(f, mock)
assert inspect.getargspec(f) == inspect.getargspec(new_function)

The limitation on keyword arguments sounds confusing (certainly the way I expressed it above), so it's easier to demonstrate in practise with the call_args attribute:

>>> from mock import Mock
>>> from extendmock import mocksignature
>>>
>>> mock = Mock()
>>>
>>> def f(a=None): pass
...
>>> f2 = mocksignature(f, mock)
>>> f2()
<mock.Mock object at 0x441d70>
>>> mock.call_args
((None,), {})
>>> mock.assert_called_with(None)
>>>

Even though we passed no arguments in, the argument with the default value (a) is called as if None was passed in explicitly. This affects the way you use assert_called_with when using Mock and mocksignature in concert.
You can still use mocksignature with functions that collect args with *args and **kwargs:

>>> from extendmock import mocksignature
>>> from mock import Mock
>>>
>>> def f(*args, **kw): pass
...
>>> mock = Mock()
>>> mock.return_value = 3
>>> f2 = mocksignature(f, mock)
>>> f2(1, 'a', None, foo='fish', bar=1.0)
3
>>> mock.call_args
((1, 'a', None), {'foo': 'fish', 'bar': 1.0})
>>>

Another problem with Mock is that it currently doesn't support mocking out the Python protocol methods (like __len__, __getitem__ and so on). extendmock contains a new class that adds magic support to Mock: MagicMock. Here's an example of how you use it:

from extendmock import MagicMock

mock = MagicMock()

_dict = {}

def getitem(self, name):
    return _dict[name]

def setitem(self, name, value):
    _dict[name] = value

def delitem(self, name):
    del _dict[name]

mock.__setitem__ = setitem
mock.__getitem__ = getitem
mock.__delitem__ = delitem

self.assertRaises(KeyError, lambda: mock['foo'])
mock['foo'] = 'bar'
self.assertEquals(_dict, {'foo': 'bar'})
self.assertEquals(mock['foo'], 'bar')
del mock['foo']
self.assertEquals(_dict, {})

You mock magic methods by assigning a function (or a mock object) to the mock instance. Magic methods are looked up on the object class by the Python interpreter. MagicMock has all the magic methods implemented in a way that checks for corresponding instance variables, with sensible behaviour if the instance variable doesn't exist. However, the presence of these magic methods on the class could break some duck-typing (if it checks for the presence or absence of these methods), so I would rather have MagicMock be a separate class instead of integrating this into the Mock class. On the other hand there is no reason why I can't move MagicMock into the mock module next time I do a release. For all magic methods you mock in this way you have to include self in the function signature. I might change this at a future date, so be warned: this is an experimental implementation.
Also note that calls to mocked magic methods aren't recorded in method_calls and don't use object wrapping - all things that may change in the future. One reason that some users have been requesting magic method support is for mocking context managers. Unfortunately __enter__ and __exit__ are looked up differently from the other magic methods in Python 2.5 and 2.6 (they aren't looked up on the class first but on the instance first like normal members). This makes the following technique still the correct way to mock the with statement.

Note
This is no longer true in the magic method support now in trunk. You mock __enter__ and __exit__ in exactly the same way as you do other magic methods.

You can also mock magic methods by assigning a Mock instance to the method you are mocking. For example:

>>> from mock import Mock
>>> mock = Mock()
>>> mock.__getitem__ = Mock()
>>> mock.__getitem__.return_value = 'bar'
>>> mock['foo']
'bar'
>>> mock.__getitem__.assert_called_with('foo')

Mocking the with statement:

mock = Mock()
mock.__enter__ = Mock()
mock.__exit__ = Mock()
mock.__exit__.return_value = False

with mock as m:
    self.assertEqual(m, mock.__enter__.return_value)

mock.__enter__.assert_called_with()
mock.__exit__.assert_called_with(None, None, None)

Posted by Fuzzyman on 2010-01-03 00:35:50 | Categories: Python, Projects, Hacking Tags: mock, mocking, testing, magic methods

This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.
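Both features sketched in this post, magic-method support and signature-preserving mocks, later landed in the standard library as unittest.mock. A present-day equivalent of the examples above (not the 2010 extendmock API; greet is an invented example function):

```python
from unittest.mock import MagicMock, create_autospec

# Magic methods work out of the box on MagicMock:
m = MagicMock()
m.__getitem__.return_value = "bar"
assert m["foo"] == "bar"
m.__getitem__.assert_called_with("foo")

# ...including context managers, no special casing needed:
with m as entered:
    assert entered is m.__enter__.return_value
m.__exit__.assert_called_with(None, None, None)

# And mocksignature's goal is served by autospeccing, which
# rejects calls that don't match the real signature:
def greet(name, greeting="hello"):
    return "%s, %s" % (greeting, name)

mock_greet = create_autospec(greet, return_value="hi")
mock_greet("bob")                      # matches greet's signature: fine
mock_greet.assert_called_with("bob")
```

Calling mock_greet with too many arguments raises TypeError, exactly the introspection-friendly behaviour mocksignature was built to provide.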
http://www.voidspace.org.uk/python/weblog/arch_d7_2010_01_02.shtml
Space: the final frontier. These are the voyages of the Varnish Project. Its continuing mission: to explore strange new requirements, to seek out new patches and new features, to boldly go where no Varnish user has gone before. Let's go to the cloud and find them dynamic backends.

Problem statement

Varnish is an HTTP caching reverse proxy, a piece of software traditionally found in your own infrastructure in front of web servers, also located in your own infrastructure. But it's been a long time now since the traditional infrastructure started its move to the cloud: a weatherly term for hosting. The problem with Varnish is to keep track of its backends when they may move with the prevailing winds. And in the cloud, your backends may fly way further than where they'd go if they were tethered in the island in the sun of the old-school infrastructure. Enough with the forecast metaphors, let's sail another ship instead. A big one actually, since its cargo is made of containers. And containers imply the ability to rapidly spin up new applications, surfing on the cloud's elastic properties. Yes, today's backends may move fast, and Varnish won't follow them by itself.

Backend limitations

The main problem with backends is that they are resolved at compile-time, so when you load a VCL its backends will be hard-coded forever. I often see backends declared using IP addresses, although I'd rather see domain names, but there's a catch. If you declare a backend using a domain name for the address, whatever was resolved will be kept indefinitely as explained above. So Varnish will not honor the TTL of the DNS records. Static backends are your ticket to kissing your cloudy cloud backends goodbye. Worse than that, if you rely on a domain name it can only resolve to at most one IPv4 and one IPv6 address. Worse still, we used to have a DNS director in Varnish 3 that would solve this problem for us, and Varnish 3 reached its end of life quite a while ago now.
And Varnish Cache Plus 3 will soon be retired.

The DNS director

Before Varnish 4.0 we could have solved the cloud problem by using the DNS director. It worked (and I use the past tense because Varnish 3 is dead) in a very unique way in the sense that it could honor a TTL, unlike plain backends. But there's a trick to it: backends were created upfront and then enabled/disabled depending on DNS lookups. The syntax is rather unique:

    director directorname dns {
        .list = {
            .host_header = "";
            .port = "80";
            .connect_timeout = 0.4s;
            "192.168.15.0"/24;
            "192.168.16.128"/25;
        }
        .ttl = 5m;
        .suffix = "internal.example.net";
    }

The list property contains the usual backend properties followed by an ACL, and the VCL compiler creates backends for every single IP address matching the ACL. Then there is the TTL: in this case, five minutes after a lookup, the next transaction using the director blocks and performs the lookup, and other workers trying to use the same director will also block until the end of the lookup. It worked like a charm, but in some scenarios it was an unrealistic solution. Would you generate backends for all IP addresses in Amazon's us-east-1 if you expect between two and ten backends at all times?

The great escape

As I see it, Varnish 3 was the feature peak of the project. Starting with Varnish 4.0, features gradually moved away from the core and sometimes spun off a mandatory escape. The concept of the mandatory escape is really simple: Varnish should give you the ability to bend the tool without being locked by the author's imagination. A multitude of entry points is available in Varnish and you can extend it using the shared memory log, the CLI, VCL or modules. My favorite being all of them at the same time.

So Varnish 4.0 removed directors from the core and gave birth to the built-in directors VMOD. You could find the usual suspects, except for the client director that was a special case of the hash, and... the DNS director.
The reason is simple: we removed directors from the core, provided an escape hatch to implement your own director (and people did!) but we then lost the ability to act during the VCL compilation phase. So no more creating backends upfront, no more DNS director, and instead many users clinging to Varnish 3.

The ability to implement features in modules and to expand module capabilities has other benefits. It keeps Varnish itself very lean, and allows more contributors to create value. Varnish 5 may have fewer features than Varnish 3, but with modules contributed by Varnish Software and other Varnish hackers we can definitely do a lot more with the latest release.

The ubiquitous solution

The thing is, there's a solution to this problem that would work from version 3.0 to 5.0 (and I'm sure with earlier versions too) but it may require tweaks to deal with VCL differences between versions (mainly Varnish 3 against the rest). The idea is pretty simple: you can generate the backends outside of Varnish, populate a director and load the new VCL.
Code generation isn't too hard to do so I made a quick example in Bourne shell:

    #!/bin/sh
    set -e

    name="$1"
    host="$2"

    backend_file=$(mktemp)
    subinit_file=$(mktemp)

    echo "sub vcl_init {" >"$subinit_file"
    echo "    new $name = directors.round_robin();" >>"$subinit_file"

    for ip in $(dig +short "$host")
    do
        backend="${name}_$(echo "$ip" | tr . _)"
        printf 'backend %s {\n    .host = "%s";\n}\n\n' "$backend" "$ip" >>"$backend_file"
        echo "    $name.add_backend($backend);" >>"$subinit_file"
    done

    echo "}" >>"$subinit_file"

    cat "$backend_file" "$subinit_file"
    rm -f "$backend_file" "$subinit_file"

On my machine, it produced the following output:

    ./backendgen.sh amazon amazon.com >amazon.vcl
    cat amazon.vcl
    backend amazon_54_239_17_6 {
        .host = "54.239.17.6";
    }

    backend amazon_54_239_17_7 {
        .host = "54.239.17.7";
    }

    backend amazon_54_239_25_192 {
        .host = "54.239.25.192";
    }

    backend amazon_54_239_25_200 {
        .host = "54.239.25.200";
    }

    backend amazon_54_239_25_208 {
        .host = "54.239.25.208";
    }

    backend amazon_54_239_26_128 {
        .host = "54.239.26.128";
    }

    sub vcl_init {
        new amazon = directors.round_robin();
        amazon.add_backend(amazon_54_239_17_6);
        amazon.add_backend(amazon_54_239_17_7);
        amazon.add_backend(amazon_54_239_25_192);
        amazon.add_backend(amazon_54_239_25_200);
        amazon.add_backend(amazon_54_239_25_208);
        amazon.add_backend(amazon_54_239_26_128);
    }

The VCL we load can then be as simple as this:

    vcl 4.0;

    import directors;

    include "amazon.vcl";

    sub vcl_recv {
        set req.backend_hint = amazon.backend();
    }

Now all you have to do is to periodically reload the VCL, and there you get pseudo-dynamic backends. Obviously this is a very simplistic example, and you can get more information from dig than just a list of IP addresses. This approach works and it is suitable for production use, but if you go down that road you may burn yourself.

Watch the temperature

We solved the discovery problem by using the dig(1) command, and routing is done by a director in Varnish. We can now work with our backends in the cloud and let them come and go in our elastic cluster, and the script will help Varnish keep track of everything. But the periodic VCL reloads come with a problem that we tried to solve with Varnish 4.1. Reloading the active VCL is fairly convenient; it can be as easy as running "service varnish reload", and if you don't pay attention to discarding older VCLs you can accumulate too many of them.
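The periodic reload itself can be scripted too; a minimal sketch (the function name is mine, and with VARNISHADM unset it only prints the varnishadm commands instead of running them):

```shell
# refresh_vcl loads a freshly generated VCL file under a unique name and
# makes it the active configuration. Dry-run by default; set
# VARNISHADM=varnishadm to run against a live instance.
refresh_vcl() {
    vcl_file="$1"
    vcl_name="backends_$(date +%Y%m%d%H%M%S)"
    ${VARNISHADM:-echo varnishadm} vcl.load "$vcl_name" "$vcl_file"
    ${VARNISHADM:-echo varnishadm} vcl.use "$vcl_name"
}

refresh_vcl /etc/varnish/amazon.vcl
```

Run from cron right after regenerating the backends, this gives each reload a timestamped name, which makes discarding old configurations easier later.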
The largest number of loaded VCLs I've seen was over 9000 (well, actually over 16000) and it must have been a frequent enough problem, because in Varnish 4.1 phk introduced the VCL temperature. The idea is that a VCL needs to be warmed up before use, and by default will cool down after use. Cold VCLs are meant to have a lower footprint (read: not hamper the active VCL) and for that they drop their probes and counters, and kindly invite VMODs to also release any resource that could later be acquired again. The bottom line is that reloading VCL implies that you should bite the bullet and come up with an appropriate discard strategy to avoid letting loaded VCLs pile up and eat all your resources.

Label rouge

We've established that we can generate the backends declaration, for example using a script called periodically. You could instead react to events for cloudy stacks that can notify you of changes. But this kind of solution won't scale too well - yet another caveat. What if you don't have one cluster of backends but several of them? They may need to be refreshed at varying paces, and rollbacks become more complex because a VCL reload may suddenly relate to a change in the code, or a change in one of the backend cluster definitions.

With Varnish 5.0 came the introduction of VCL labels. They were initially just symbolic links you could use to give aliases to your VCLs. But there was a hidden agenda behind the labels and later on it became possible to jump from the active VCL to a label. If you run Varnish in front of several unrelated domains, with the use of labels you can now have separate cache policies that don't risk leaking logic between each other (although there are other things than logic that can leak) and each label can have a life of its own and be reloaded independently.
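A discard strategy can be as simple as sweeping inactive configurations after each reload; a sketch (the parsing is an assumption modeled on "varnishadm vcl.list" output, and the helper name is mine):

```shell
# discard_candidates reads "varnishadm vcl.list"-style output on stdin
# and prints one "vcl.discard <name>" line per inactive configuration,
# ready to be fed back to varnishadm.
discard_candidates() {
    awk '$1 == "available" { print "vcl.discard", $NF }'
}

# Example with a captured listing (normally: varnishadm vcl.list | ...).
printf '%s\n' \
    'active      auto/warm        100 reload_20170401' \
    'available   auto/cold          0 boot' \
| discard_candidates
```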
The syntax is simple:

    sub vcl_recv {
        if (req.http.host ~ "amazon.com") {
            return (vcl(amazon));
        }
        if (req.http.host ~ "acme.com") {
            return (vcl(acme));
        }
    }

So now I can generate and refresh VCL independently for my amazon and acme clusters as long as I load them using their respective labels. Chances are that the web applications behind them need different caching policies, and in a multi-tenant Varnish installation it will make things a lot easier (spoiler alert: isolated caching policies would still share the same cache infrastructure).

Back to the topic at hand

So far, the title of this post has been misleading as we haven't touched on dynamic backends at all. So let's remedy that and simply say that it was introduced in Varnish 4.1, or at least made possible with this release. The backend (VBE) and director (VDI) subsystems sustained heavy changes between Varnish 3.0 and 4.0, breaking everyone's VCL that made use of directors. Well, breaking most VCL even without directors. While VCL remained stable between 4.0 and 4.1, those subsystems once again sustained heavy changes, although users don't see any sign of it in their VCL code: dynamic backends obviously, but also custom backend transports and the impacts of VCL temperature.

Varnish Cache doesn't ship with built-in dynamic backends, but Varnish Plus features two VMODs addressing that need: a drop-in replacement of the DNS director like the one in Varnish 3, and on-demand backends with one called goto.

The two hard problems

I won't be talking much more about vmod-goto, but will focus on the DNS director equivalent. Conceptually, it's a DNS cache, populated after lookups and evicted based on a TTL. Although cache invalidation is one of the two hard problems in computer science, invalidation wasn't hard to solve considering it's a rewrite of the DNS director: do the same invalidation again.
The only difficulty was dealing with Varnish internals and figuring out how to evolve them until, at some point, they stabilized while 4.1 was still under development. No! The real hard problem for this module was to name it. There was already a DNS module so I had to come up with something else and I eventually named it named, after the N in DNS and so that it would read nicely in VCL:

    new amazon = named.director();

It took the development cycle from 4.0 to 4.1, and two point releases to get to a usable state.

DNS director improved

The DNS director in Varnish 3 had an ACL-like notation used to create all the possible backends ahead of time. And such backends couldn't benefit from probe support. vmod-named however relies on dynamic backends that are created just in time, an actual ACL can be used as a white-list, and probes work. Another difference with the DNS director is that lookups used to happen during transactions once the TTL had expired, blocking all transactions bound to the director. With vmod-named, lookups happen asynchronously, and never slow down requests.

Finally, an anecdotal novelty (I would have missed it if it weren't for Geoff, thanks!) of Varnish 4.1 greatly simplified the director's creation. VMODs can have optional named (this is going meta) parameters, and with a constructor consisting of many parameters, being able to use names instead of relying on parameter order is bliss:

    new my_dir = named.director(
        port = "80",
        probe = my_probe,
        whitelist = my_acl,
        ttl = 5m);

Known limitations

The DNS support is very limited, and only A and AAAA records are partially supported. Using the system's resolver means losing a lot of information from the lookup results. You have to specify a TTL because only the IP addresses are part of the results. And two distinct backends will be created if a machine is bound to both IPv4 and IPv6 addresses. Also, DNS is a passive system (I'm not referring to passive DNS); it doesn't notify you of name changes.
So it's your job to figure out an appropriate TTL depending on your elastic expectations. And of course the biggest limitation is that you need a name server, otherwise there's no point in using DNS. If you've used the DNS director, vmod-named's limitations are nothing new.

Going further

DNS is one of those obscure protocols on the internet. It is highly documented and yet often overlooked, and it's not rare to find hard-coded IP addresses in configuration files (that includes Varnish). However, after searching a bit it turns out to be available in a couple of places. Check your cloud provider's documentation, or your favorite container orchestrator. They will likely provide some kind of DNS support.

If you're stuck on Varnish 3 because of the DNS director, you can now upgrade to Varnish Plus 4.1 and replace it with vmod-named. For other dynamic backend needs, for instance the use of DNS SRV records, or a discovery system not based on DNS, you will find that vmod-goto meets (or will meet) your needs. Contact us if you want to learn more about dynamic backends. More information about vmod-goto will follow soon.

To learn more about dynamic backends and other new features in Varnish Plus, please register for our upcoming webinar.
https://info.varnish-software.com/blog/varnish-backends-in-the-cloud
Hello everyone,

Time flies: we've survived another winter and are greeting the new spring. And we're getting closer and closer to the next RubyMine release. Today we're glad to announce the next chunk of Hinoki features. Please welcome build 135.433 of the Early Access Program.

Better Slim support

With this version the embedded code types are supported in Slim templates:

Supporting json.jbuilder views

Syntax highlighting and Ruby code insight are now available for json.jbuilder files:

And, finally, we fixed a very annoying bug: gems installed in a local BUNDLER_PATH can now be found.

Please visit the release notes page for more details and screenshots, and download the EAP build to try it out.

Have a nice spring holiday! We're looking forward to your feedback as usual.

Develop with pleasure!
JetBrains RubyMine Team

Hi, what about multi-selection mode like in the PhpStorm EAP, do you plan to add this feature in the next major release?

Yes, you can even try it in this EAP actually. More details are coming soon, please stay tuned.

Bug: RubyMine doesn't like "%table#workorders{bootstrap_datatables_attributes, :data => {:source => work_orders_url(:format => "json")}}" in a haml file, where bootstrap_datatables_attributes is a Ruby function. It works, but RubyMine underlines it in red.

    def bootstrap_datatables_attributes
      {:class => 'display datatable table table-striped table-bordered', :cellpadding => '0', :cellspacing => '0', :border => '0'}
    end

Thank you very much for reporting! This is a known issue, please vote for.

What about Slim 2? E.g. output with leading/trailing spaces =< =>

It is a known problem: We are working on it.
http://blog.jetbrains.com/ruby/2014/03/rubymine-hinoki-eap-jbuilder-views-better-slim-support/
SENDMAIL(TM) INSTALLATION AND OPERATION GUIDE

Eric Allman
Claus Assmann
Gregory Neil Shapiro
Proofpoint, Inc.

Version 8.759
For Sendmail Version 8.14

Sendmail(TM) implements a general purpose internetwork mail routing facility under the UNIX(R) operating system. There are only a few basic configurations for most sites, for which standard configuration files have been supplied. Most other configurations can be built by adjusting an existing configuration file incrementally.

Sendmail is based on RFC 821 (Simple Mail Transport Protocol), RFC 822 (Internet Mail Headers Format), RFC 974 (MX routing), RFC 1123 (Internet Host Requirements), RFC 1413 (Identification server), RFC 1652 (SMTP 8BITMIME Extension), RFC 1869 (SMTP Service Extensions), RFC 1870 (SMTP SIZE Extension), RFC 1891 (SMTP Delivery Status Notifications), RFC 1892 (Multipart/Report), RFC 1893 (Enhanced Mail System Status Codes), RFC 1894 (Delivery Status Notifications), RFC 1985 (SMTP Service Extension for Remote Message Queue Starting), RFC 2033 (Local Message Transmission Protocol), RFC 2034 (SMTP Service Extension for Returning Enhanced Error Codes), RFC 2045 (MIME), RFC 2476 (Message Submission), RFC 2487 (SMTP Service Extension for Secure SMTP over TLS), RFC 2554 (SMTP Service Extension for Authentication), RFC 2821 (Simple Mail Transfer Protocol), RFC 2822 (Internet Message Format), RFC 2852 (Deliver By SMTP Service Extension), and RFC 29. In many circumstances sendmail can be configured to exceed these protocols. These features are described in later sections.

Section one describes how to do a basic sendmail installation. Section two explains the day-to-day information you should know to maintain your mail system. Section six describes configuration that can be done at compile time. The appendixes give a brief but detailed explanation of a number of features not described in the rest of the paper.

____________________
DISCLAIMER: This documentation is under modification. Sendmail is a trademark of Proofpoint, Inc. US Patent Numbers 6865671, 6986037.

1. BASIC INSTALLATION

There are two basic steps to installing sendmail: compiling and installing the binary, and building the configuration file.
Assuming you have the standard sendmail distribution, see cf/README for further information.

The remainder of this section will describe the installation of sendmail assuming you can use one of the existing configurations and that the standard installation parameters are acceptable. All pathnames and examples are given from the root of the sendmail subtree, normally /usr/src/usr.sbin/sendmail on 4.4BSD-based systems. Continue with the next section if you need/want to compile sendmail yourself. If you have a running binary already on your system, you should probably skip to section 1.2.

1.1. Compiling Sendmail

All sendmail source is in the sendmail subdirectory. To compile sendmail, "cd" into the sendmail directory and type

        ./Build

This will leave the binary in an appropriately named subdirectory, e.g., obj.BSD-OS.2.1.i386. It works for multiple object versions compiled out of the same directory.

1.1.1. Tweaking the Build Invocation

You can give parameters on the Build command. In most cases these are only used when the obj.* directory is first created. To restart from scratch, use -c. These parameters include:

        -L libdirs      A list of directories to search for libraries.

        -I incdirs      A list of directories to search for include files.

        -E envar=value  Set an environment variable to an indicated value before compiling.

        -c              Create a new obj.* tree before running.

        -f siteconfig   Read the indicated site configuration file. If this parameter is not specified, Build includes all of the files $BUILDTOOLS/Site/site.$oscf.m4 and $BUILDTOOLS/Site/site.config.m4, where $BUILDTOOLS is normally ../devtools and $oscf is the same name as used on the obj.* directory. See below for a description of the site configuration file.

        -S              Skip auto-configuration. Build will avoid auto-detecting libraries if this is set. All libraries and map definitions must be specified in the site configuration file.

Most other parameters are passed to the make program; for details see $BUILDTOOLS/README.

1.1.2. Creating a Site Configuration File

(This section is not yet complete. For now, see the file devtools/README for details.) See sendmail/README for various compilation flags that can be set.

1.1.3. Tweaking the Makefile

Sendmail supports two different formats for the local (on disk) version of databases, notably the aliases database. At least one of these should be defined if at all possible.

        NDBM    The ``new DBM'' format, available on nearly all systems around today. This was the preferred format prior to 4.4BSD. It allows such complex things as multiple databases and closing a currently open database.

        NEWDB   The Berkeley DB package. If you have this, use it. It allows long records, multiple open databases, real in-memory caching, and so forth. You can define this in conjunction with NDBM; if you do, old alias databases are read, but when a new database is created it will be in NEWDB format.

As a nasty hack, if you have NEWDB, NDBM, and NIS defined, and if the alias file name includes the substring "/yp/", sendmail will create both new and old versions of the alias file during a newaliases command. This is required because the Sun NIS/YP system reads the DBM version of the alias file. It's ugly as sin, but it works.

If neither of these are defined, sendmail reads the alias file into memory on every invocation. This can be slow and should be avoided. There are also several methods for remote database access:

        LDAP            Lightweight Directory Access Protocol.
        NIS             Sun's Network Information Services (formerly YP).
        NISPLUS         Sun's NIS+ services.
        NETINFO         NeXT's NetInfo service.
        HESIOD          Hesiod service (from Athena).

Other compilation flags are set in conf.h and should be predefined for you unless you are porting to a new environment. For more options see sendmail/README.

1.1.4. Compilation and Installation

After making the local system configuration described above, you should be able to compile and install the system. The script "Build" is the best approach on most systems:

        ./Build

This will use uname(1) to create a custom Makefile for your environment. If you are installing in the standard places, you should be able to install using

        ./Build install

This should install the binary in /usr/sbin and create links from /usr/bin/newaliases and /usr/bin/mailq to /usr/sbin/sendmail. On most systems it will also format and install man pages. Notice: as of version 8.12 sendmail will no longer be installed set-user-ID root by default. If you really want to use the old method, you can specify it as target:

        ./Build install-set-user-id

1.2. Configuration Files

Sendmail is a flexible program, and its configuration reflects that. The distribution includes an m4-based configuration package that hides a lot of the complexity. See cf/README for details.

Our configuration files are processed by m4 to facilitate local customization; the directory cf of the sendmail distribution directory contains the source files. This directory contains several subdirectories:

        cf      Both site-dependent and site-independent descriptions of hosts. These can be literal host names (e.g., "ucbvax.mc") when the hosts are gateways, or more general descriptions (such as "generic-solaris2.mc" as a general description of an SMTP-connected host running Solaris 2.x). Files ending .mc (``M4 Configuration'') are the input descriptions; the output is in the corresponding .cf file. The general structure of these files is described below.

        domain  Site-dependent subdomain descriptions. These are tied to the way your organization wants to do addressing. For example, domain/CS.Berkeley.EDU.m4 is our description for hosts in the CS.Berkeley.EDU subdomain. These are referenced using the DOMAIN m4 macro in the .mc file.
        feature Definitions of specific features that some particular host in your site might want. These are referenced using the FEATURE m4 macro. An example feature is use_cw_file (which tells sendmail to read an /etc/mail/local-host-names file on startup to find the set of local names).

        hack    Local hacks, referenced using the HACK m4 macro. Try to avoid these. The point of having them here is to make it clear that they smell.

        m4      Site-independent m4(1) include files that have information common to all configuration files. This can be thought of as a "#include" directory.

        mailer  Definitions of mailers, referenced using the MAILER m4 macro. The mailer types that are known in this distribution are fax, local, smtp, uucp, and usenet. For example, to include support for the UUCP-based mailers, use "MAILER(uucp)".

        ostype  Definitions describing various operating system environments (such as the location of support files). These are referenced using the OSTYPE m4 macro.

        sh      Shell files used by the m4 build process. You shouldn't have to mess with these.

        siteconfig
                Local UUCP connectivity information. This directory has been supplanted by the mailertable feature; any new configurations should use that feature to do UUCP (and other) routing. The use of this directory is deprecated.

1.3. Details of Installation Files

This subsection describes the files that comprise the sendmail installation.

1.3.1. /usr/sbin/sendmail

The binary for sendmail is located in /usr/sbin[1]. It should be set-group-ID smmsp as described in sendmail/SECURITY. For security reasons, /, /usr, and /usr/sbin should be owned by root, mode 0755[2].

1.3.2. /etc/mail/sendmail.cf

This is the main configuration file for sendmail[3]. This is one of the two non-library file names compiled into sendmail[4], the other is /etc/mail/submit.cf.
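The ownership and mode recommendations in 1.3.1 can be spot-checked with a short script; a sketch (the helper name is mine, and GNU coreutils stat is assumed):

```shell
# check_mode: verify that a path has the expected octal mode, as
# recommended for /, /usr and /usr/sbin (0755, owned by root).
check_mode() {
    want="$1"
    path="$2"
    got=$(stat -c %a "$path") || return 1
    if [ "$got" != "$want" ]; then
        echo "$path: mode $got, want $want" >&2
        return 1
    fi
}

# Demonstrate against a scratch directory rather than the live system.
d=$(mktemp -d)
chmod 755 "$d"
check_mode 755 "$d" && echo "mode ok"
```

On systems without GNU stat (e.g. BSD), `stat -f %Lp` is the rough equivalent.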
The configuration file is normally created using the distribution files described above. If you have a particularly unusual system configuration you may need to create a special version. The format of this file is detailed in later sections of this document.

____________________
[1] This is usually /usr/sbin on 4.4BSD and newer systems; many systems install it in /usr/lib. I understand it is in /usr/ucblib on System V Release 4.
[2] Some vendors ship them owned by bin; this creates a security hole that is not actually related to sendmail. Other important directories that should have restrictive ownerships and permissions are /bin, /usr/bin, /etc, /etc/mail, /usr/etc, /lib, and /usr/lib.
[3] Actually, the pathname varies depending on the operating system; /etc/mail is the preferred directory. Some older systems install it in /usr/lib/sendmail.cf, and I've also seen it in /usr/ucblib. If you want to move this file, add -D_PATH_SENDMAILCF=\"/file/name\" to the flags passed to the C compiler. Moving this file is not recommended: other programs and scripts know of this location.
[4] The system libraries can reference other files; in particular, system library subroutines that sendmail calls probably reference /etc/passwd and /etc/resolv.conf.

1.3.3. /etc/mail/submit.cf

This is the configuration file for sendmail when it is used for initial mail submission, in which case it is also called ``Mail Submission Program'' (MSP) in contrast to ``Mail Transfer Agent'' (MTA). Starting with version 8.12, sendmail uses one of two different configuration files based on its operation mode (or the new -A option). For initial mail submission, i.e., if one of the options -bm (default), -bs, or -t is specified, submit.cf is used (if available); for other operations sendmail.cf is used. Details can be found in sendmail/SECURITY. submit.cf is shipped with sendmail (in cf/cf/) and is installed by default.
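Configuration files like sendmail.cf and submit.cf are generated from short .mc inputs built out of the OSTYPE, DOMAIN, FEATURE and MAILER macros described above; a minimal illustrative sketch (the OSTYPE and DOMAIN values are placeholders for your site):

```
divert(-1)
# Minimal example .mc; process with m4 as described in cf/README.
divert(0)
OSTYPE(`linux')
DOMAIN(`generic')
FEATURE(`use_cw_file')
MAILER(`local')
MAILER(`smtp')
```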
If changes to the configuration need to be made, start with cf/cf/submit.mc and follow the instructions in cf/README.

1.3.7. /var/spool/mqueue

The directory /var/spool/mqueue should be created to hold the mail queue. This directory should be mode 0700 and owned by root. The actual path of this directory is defined by the QueueDirectory option of the sendmail.cf file. To use multiple queues, supply a value ending with an asterisk. For example, /var/spool/mqueue/qd* will use all of the directories or symbolic links to directories beginning with `qd' in /var/spool/mqueue as queue directories.

If shared memory support is compiled in, sendmail stores the available diskspace in a shared memory segment to make the values readily available to all children without incurring system overhead. In this case, only the daemon updates the data; i.e., the sendmail daemon creates the shared memory segment and deletes it if it is terminated. To use this, sendmail must have been compiled with support for shared memory (-DSM_CONF_SHM) and the option SharedMemoryKey must be set. Notice: do not use the same key for sendmail invocations with different queue directories or different queue group declarations. Access to shared memory is not controlled by locks, i.e., there is a race condition when data in the shared memory is updated. However, since operation of sendmail does not rely on the data in the shared memory, this does not negatively influence the behavior.

1.3.8. /var/spool/clientmqueue

The directory /var/spool/clientmqueue should be created to hold the mail queue. This directory should be mode 0770 and owned by user smmsp, group smmsp. The actual path of this directory is defined by the QueueDirectory option of the submit.cf file.

1.3.9. /var/spool/mqueue/.hoststat

This is a typical value for the HostStatusDirectory option, containing one file per host that this sendmail has chatted with recently. It is normally a subdirectory of mqueue.

1.3.10. /etc/mail/aliases*

The system aliases are held in "/etc/mail/aliases". A sample is given in "sendmail/aliases" which includes some aliases which must be defined:

        cp sendmail/aliases /etc/mail/aliases
        edit /etc/mail/aliases

You should extend this file with any aliases that are apropos to your system. Normally sendmail looks at a database version of the files, stored either in "/etc/mail/aliases.dir" and "/etc/mail/aliases.pag" or "/etc/mail/aliases.db" depending on which database package you are using. The actual path of this file is defined in the AliasFile option of the sendmail.cf file.

The permissions of the alias file and the database versions should be 0640 to prevent local denial of service attacks as explained in the top level README in the sendmail distribution. If the permissions 0640 are used, be sure that only trusted users belong to the group assigned to those files. Otherwise, files should not even be group readable.

1.3.11. /etc/rc or /etc/init.d/sendmail

It will be necessary to start up the sendmail daemon when your system reboots. This daemon performs two functions: it listens on the SMTP socket for connections (to receive mail from a remote system) and it processes the queue periodically to insure that mail gets delivered when hosts come up. If necessary, the daemon can be started from your system startup scripts, e.g., with "/usr/sbin/sendmail -bd -q30m".

Some people use a more complex startup script, removing zero length qf/hf/Qf files and df files for which there is no qf/hf/Qf file. Note this is not advisable. For example, see Figure 1 for an example of a complex script which does this clean up.

1.3.12. /etc/mail/helpfile

This is the help file used by the SMTP HELP command.
It should be copied from "sendmail/helpfile":

        cp sendmail/helpfile /etc/mail/helpfile

The actual path of this file is defined in the HelpFile option of the sendmail.cf file.

1.3.13. /etc/mail/statistics

If you wish to collect statistics about your mail traffic, you should create the file "/etc/mail/statistics":

        cp /dev/null /etc/mail/statistics
        chmod 0600 /etc/mail/statistics

This file does not grow. It is printed with the program "mailstats/mailstats.c." The actual path of this file is defined in the S option of the sendmail.cf file.

1.3.14. /usr/bin/mailq

If sendmail is invoked as "mailq," it will simulate the -bp flag (i.e., sendmail will print the contents of the mail queue; see below). This should be a link to /usr/sbin/sendmail.

____________________________________________________________

        #!/bin/sh
        # remove zero length qf/hf/Qf files
        for qffile in qf* hf* Qf*
        do
                if [ -r $qffile ]
                then
                        if [ ! -s $qffile ]
                        then
                                echo -n " <zero: $qffile>" > /dev/console
                                rm -f $qffile
                        fi
                fi
        done
        # rename tf files to be qf if the qf does not exist
        for tffile in tf*
        do
                qffile=`echo $tffile | sed 's/t/q/'`
                if [ -r $tffile -a ! -f $qffile ]
                then
                        echo -n " <recovering: $tffile>" > /dev/console
                        mv $tffile $qffile
                else
                        if [ -f $tffile ]
                        then
                                echo -n " <extra: $tffile>" > /dev/console
                                rm -f $tffile
                        fi
                fi
        done
        # remove df files with no corresponding qf/hf/Qf files
        for dffile in df*
        do
                qffile=`echo $dffile | sed 's/d/q/'`
                hffile=`echo $dffile | sed 's/d/h/'`
                Qffile=`echo $dffile | sed 's/d/Q/'`
                if [ -r $dffile -a ! -f $qffile -a ! -f $hffile -a ! -f $Qffile ]
                then
                        echo -n " <incomplete: $dffile>" > /dev/console
                        mv $dffile `echo $dffile | sed 's/d/D/'`
                fi
        done
        # announce files that have been saved during disaster recovery
        for xffile in [A-Z]f*
        do
                if [ -f $xffile ]
                then
                        echo -n " <panic: $xffile>" > /dev/console
                fi
        done

Figure 1 - A complex startup script
____________________________________________________________

1.3.15. sendmail.pid

sendmail stores its current pid in the file specified by the PidFile option (default is _PATH_SENDMAILPID). sendmail uses TempFileMode (which defaults to 0600) as the permissions of that file to prevent local denial of service attacks as explained in the top level README in the sendmail distribution. If the file already exists, then it might be necessary to change the permissions accordingly, e.g.,

        chmod 0600 /var/run/sendmail.pid

Note that as of version 8.13, this file is unlinked when sendmail exits. As a result of this change, a script such as the following, which may have worked prior to 8.13, will no longer work:

        # stop & start sendmail
        PIDFILE=/var/run/sendmail.pid
        kill `head -1 $PIDFILE`
        `tail -1 $PIDFILE`

because it assumes that the pidfile will still exist even after killing the process to which it refers. Below is a script which will work correctly on both newer and older versions:

        # stop & start sendmail
        PIDFILE=/var/run/sendmail.pid
        pid=`head -1 $PIDFILE`
        cmd=`tail -1 $PIDFILE`
        kill $pid
        $cmd

This is just an example script, it does not perform any error checks, e.g., whether the pidfile exists at all.

1.3.16. Map Files

To prevent local denial of service attacks as explained in the top level README in the sendmail distribution, the permissions of map files created by makemap should be 0640. The use of 0640 implies that only trusted users belong to the group assigned to those files. If those files already exist, then it might be necessary to change the
If those files already exist, it might be necessary to change the permissions accordingly, e.g.,

    cd /etc/mail
    chmod 0640 *.db *.pag *.dir

2. NORMAL OPERATIONS

2.1. The System Log

The system log is supported by the syslogd(8) program. All messages from sendmail are logged under the LOG_MAIL facility[5].

2.1.1. Format

There is one line logged per message received; it consists of a sequence of name=value pairs[6]. Fields are:

    from      The envelope sender address.
    size      The size of the message in bytes.
    class     The class (i.e., numeric precedence) of the message.
    pri       The initial message priority (used for queue sorting).
    nrcpts    The number of envelope recipients for this message (after aliasing and forwarding).
    msgid     The message id of the message (from the header).
    bodytype  The message body type (7BIT or 8BITMIME), as determined from the envelope.
    proto     The protocol used to receive this message (e.g., ESMTP or UUCP).
    daemon    The daemon name from the DaemonPortOptions setting.
    relay     The machine from which it was received.

There is also one line logged per delivery attempt (so there can be several per message if delivery is deferred or there are multiple recipients). Fields are:

    to        A comma-separated list of the recipients to this mailer.
    ctladdr   The ``controlling user'', that is, the name of the user whose credentials we use for delivery.
    delay     The total delay between the time this message was received and the current delivery attempt.
    xdelay    The amount of time needed in this delivery attempt (normally indicative of the speed of the connection).
    mailer    The name of the mailer used to deliver to this recipient.
    relay     The name of the host that actually accepted (or rejected) this recipient.
    dsn       The enhanced error code (RFC 2034) if available.
    stat      The delivery status.

____________________
[5] Except on Ultrix, which does not support facilities in the syslog.
[6] This format may vary slightly if your vendor has changed the syntax.
Not all fields are present in all messages; for example, the relay is usually not listed for local deliveries.

2.1.2. Levels

If you have syslogd(8) or an equivalent installed, you will be able to do logging. There is a large amount of information that can be logged. The log is arranged as a succession of levels. At the lowest level only extremely strange situations are logged; a complete description of the log levels is given in the section ``Log Level''.

2.2. Dumping State

You can ask sendmail to log a dump of the open files and the connection cache by sending it a SIGUSR1 signal. The results are logged at LOG_DEBUG priority.

2.3. The Mail Queues

Mail messages may either be delivered immediately or be held for later delivery. Held messages are placed into a holding directory called a mail queue. A mail message may be queued for these reasons:

+ If a mail message is temporarily undeliverable, it is queued and delivery is attempted later. If the message is addressed to multiple recipients, it is queued only for those recipients to whom delivery is not immediately possible.

+ If the SuperSafe option is set to true, all mail messages are queued while delivery is attempted.

+ If the DeliveryMode option is set to queue-only or defer, all mail is queued, and no immediate delivery is attempted.

+ If the load average becomes higher than the value of the QueueLA option and the QueueFactor (q) option divided by the difference in the current load average and the QueueLA option plus one is less than the priority of the message, messages are queued rather than immediately delivered.

+ One or more addresses are marked as expensive and delivery is postponed until the next queue run, or one or more addresses are marked as held via a mailer which uses the hold mailer flag.

+ The mail message has been marked as quarantined via a mail filter or rulesets.

2.3.1. Queue Groups and Queue Directories

There are one or more mail queues. Each mail queue belongs to a queue group.
There is always a default queue group that is called ``mqueue'' (which is where messages go by default unless otherwise specified). The directory or directories which comprise the default queue group are specified by the QueueDirectory option.

In general this should be smoothed out due to the distribution of those slow jobs; however, for sites with a small number of queue entries this might introduce noticeable delays. In general, persistent queue runners are only useful for sites with big queues.

2.3.3. Manual Intervention

Under normal conditions the mail queue will be processed transparently. However, you may find that manual intervention is sometimes necessary. For example, if a major host is down for a period of time the queue may become clogged. Although sendmail ought to recover gracefully when the host comes up, you may find performance unacceptably bad in the meantime. In that case you want to check the content of the queue and manipulate it as explained in the next two sections.

2.3.4. Printing the queue

The contents of the queue(s) can be printed using the mailq command (or by specifying the -bp flag to sendmail):

    mailq

This will produce a listing of the queue id's, the size of the message, the date the message entered the queue, and the sender and recipients. If shared memory support is compiled in, the flag -bP can be used to print the number of entries in the queue(s), provided a process updates the data. However, as explained earlier, the output might be slightly wrong, since access to the shared memory is not locked. For example, ``unknown number of entries'' might be shown. The internal counters are updated after each queue run to the correct value again.

2.3.5. Forcing the queue

Sendmail should run the queue automatically at intervals.
When using multiple queues, a separate process will by default be created to run each of the queues unless the queue run is initiated by a user with the verbose flag. Note that queue runs which do not complete promptly can accumulate many processes in your system. Unfortunately, there is no completely general way to solve this.

In some cases, you may find that a major host going down for a couple of days may create a prohibitively large queue. In that case, you can move the queue to a temporary place and create a new queue:

    cd /var/spool
    mv mqueue omqueue; mkdir mqueue; chmod 0700 mqueue

You should then kill the existing daemon (since it will still be processing in the old queue directory) and create a new daemon. To run the old mail queue, issue the following command:

    /usr/sbin/sendmail -C /etc/mail/queue.cf -q

The -C flag specifies an alternate configuration file queue.cf which should refer to the moved queue directory:

    O QueueDirectory=/var/spool/omqueue

and the -q flag says to just run every job in the queue. You can also specify the moved queue directory on the command line:

    /usr/sbin/sendmail -oQ/var/spool/omqueue -q

but this requires that you do not have queue groups in the configuration file, because those are not subdirectories of the moved directory. See the section about ``Queue Group Declaration'' for details; you most likely need a different configuration file to correctly deal with this problem. However, a proper configuration of queue groups should avoid filling up queue directories, so you shouldn't run into this problem. If you have a tendency toward voyeurism, you can use the -v flag to watch what is going on. When the queue is finally emptied, you can remove the directory:

    rmdir /var/spool/omqueue

2.3.6. Quarantined Queue Items

It is possible to "quarantine" mail messages, otherwise known as envelopes. Envelopes (queue files) are stored but not considered for delivery or display unless the "quarantine" state of the envelope is undone or delivery or display of quarantined items is requested.
Quarantined messages are tagged by using a different name for the queue file, 'hf' instead of 'qf', and by adding the quarantine reason to the queue file.

Delivery or display of quarantined items can be requested using the -qQ flag to sendmail or mailq. Additionally, messages already in the queue can be quarantined or unquarantined using the new -Q flag to sendmail. For example,

    sendmail -Qreason -q[!][I|R|S][matchstring]

quarantines the normal queue items matching the criteria specified by the -q[!][I|R|S][matchstring] using the reason given on the -Q flag. Likewise,

    sendmail -qQ -Q[reason] -q[!][I|R|S|Q][matchstring]

changes the quarantine reason for the quarantined items matching the criteria specified by the -q[!][I|R|S|Q][matchstring] using the reason given on the -Q flag. If there is no reason, the matching items are unquarantined and made normal queue items. Note that the -qQ flag tells sendmail to operate on quarantined items instead of normal items.

2.4. Disk Based Connection Information

Sendmail stores a large amount of information about each remote system it has connected to in memory. It is possible to preserve some of this information on disk as well, by using the HostStatusDirectory option, so that it may be shared between several invocations of sendmail. This allows mail to be queued immediately or skipped during a queue run if there has been a recent failure in connecting to a remote machine.

Note: information about a remote system is stored in a file whose pathname consists of the components of the hostname in reverse order. For example, the information for host.example.com is stored in com./example./host. For top-level domains like com this can create a large number of subdirectories which on some filesystems can exhaust some limits. Moreover, the performance of lookups in a directory with thousands of entries can be fairly slow depending on the filesystem implementation.
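The reversed-component layout described above can be illustrated with a small POSIX sh sketch. The function name is illustrative only (it is not part of sendmail); the path format follows the host.example.com -> com./example./host mapping given in the text.

```shell
# Sketch: build the relative .hoststat path for a hostname, using
# the reversed-component layout described above.  hoststat_path is
# a hypothetical helper, not a sendmail command.
hoststat_path() {
    host=$1
    path=""
    old_ifs=$IFS
    # Split the hostname on dots and prepend each label, so the
    # components end up in reverse order: host example com -> com./example./host./
    IFS=.
    for label in $host; do
        path="$label./$path"
    done
    IFS=$old_ifs
    # Strip the trailing "./" left on the last (leftmost) component.
    printf '%s\n' "${path%./}"
}

hoststat_path host.example.com   # prints com./example./host
```

This makes it easy to see why a handful of popular top-level domains (com, net, org) concentrate all host entries under a few directories.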
Additionally, enabling SingleThreadDelivery has the added effect of single-threading mail delivery to a destination. This can be quite helpful if the remote machine is running an SMTP server that is easily overloaded; mail that is talking to the same host will be tried again quickly rather than being delayed for a long time.

The disk based host information is stored in a subdirectory of the mqueue directory called .hoststat[7]. Removing this directory and its subdirectories has an effect similar to the purgestat command and is completely safe. However, purgestat only removes expired (Timeout.hoststatus) data; the stored information can be expired at any time with the purgestat command or by invoking sendmail with the -bH switch. The connection information may be viewed with the hoststat command or by invoking sendmail with the -bh switch.

2.5. The Service Switch

The implementation of certain system services such as host and user name lookup is controlled by the service switch. If the host operating system supports such a switch, and sendmail knows about it, sendmail will use the native version. Ultrix, Solaris, and DEC OSF/1 are examples of such systems[8]. If the underlying operating system does not support a service switch, sendmail will use its own (defined by the ServiceSwitchFile option). For example, the file:

    hosts     dns files nis
    aliases   files nis

will ask sendmail to look for hosts in the Domain Name System first. If the requested host name is not found, it tries local files, and if that fails it tries NIS. Similarly, when looking for aliases it will try the local files first followed by NIS. Service switches are not completely integrated.

____________________
[7] This is the usual value of the HostStatusDirectory option; it can, of course, go anywhere you like in your filesystem.
[8] HP-UX 10 has service switch support, but since the APIs are apparently not available in the libraries sendmail does not use the native service switch in this release.
For example, despite the fact that the host entry listed in the above example specifies to look in NIS, on SunOS this won't happen because the system implementation of gethostbyname(3) doesn't understand this.

2.6. The Alias Database

After recipient addresses are read from the SMTP connection or command line they are parsed by ruleset 0, which must resolve to a {mailer, host, address} triple. If the flags selected by the mailer include the A (aliasable) flag, the address part of the triple is looked up as the key (i.e., the left hand side) in the alias database. If there is a match, the address is deleted from the send queue and all addresses on the right hand side of the alias are added in place of the alias that was found. This is a recursive operation, so aliases found in the right hand side of the alias are similarly expanded.

The alias database exists in two forms. One is a text form, maintained in the file /etc/mail/aliases. The aliases are of the form

    name: name1, name2, ...

Only local names may be aliased; e.g.,

    eric@prep.ai.MIT.EDU: eric@CS.Berkeley.EDU

will not have the desired effect (except on prep.ai.MIT.EDU, and they probably don't want me)[9]. Aliases may be continued by starting any continuation lines with a space or a tab or by putting a backslash directly before the newline. Blank lines and lines beginning with a sharp sign ("#") are comments.

The second form is processed by the ndbm(3)[10] or the Berkeley DB library. This form is in the file /etc/mail/aliases.db (if using NEWDB) or /etc/mail/aliases.dir and /etc/mail/aliases.pag (if using NDBM). This is the form that sendmail actually uses to resolve aliases. This technique is used to improve performance.

The control of search order is actually set by the service switch.

____________________
[9] Actually, any mailer that has the `A' mailer flag set will permit aliasing; this is normally limited to the local mailer.
Essentially, the entry

    O AliasFile=switch:aliases

is always added as the first alias entry; also, the first alias file name without a class (e.g., without "nis:" on the front) will be used as the name of the file for a ``files'' entry in the aliases switch. For example, if the configuration file contains

    O AliasFile=/etc/mail/aliases

and the service switch contains

    aliases   nis files nisplus

then aliases will first be searched in the NIS database, then in /etc/mail/aliases, then in the NIS+ database.

You can also use NIS-based alias files. For example, the specification:

    O AliasFile=/etc/mail/aliases
    O AliasFile=nis:mail.aliases@my.nis.domain

will first search the /etc/mail/aliases file and then the map named "mail.aliases" in "my.nis.domain". Warning: if you build your own NIS-based alias files, be sure to provide the -l flag to makedbm(8) to map upper case letters in the keys to lower case; otherwise, aliases with upper case letters in their names won't match incoming addresses.

Additional flags can be added after the colon exactly like a K line - for example:

    O AliasFile=nis:-N mail.aliases@my.nis.domain

will search the appropriate NIS map and always include null bytes in the key. Also:

    O AliasFile=nis:-f mail.aliases@my.nis.domain

will prevent sendmail from downcasing the key before the alias lookup.

____________________
[10] The gdbm package does not work.

2.6.1. Rebuilding the alias database

The hash or dbm version of the database may be rebuilt explicitly by executing the command

    newaliases

This is equivalent to giving sendmail the -bi flag:

    /usr/sbin/sendmail -bi

If you have multiple aliases databases specified, the -bi flag rebuilds all the database types it understands (for example, it can rebuild NDBM databases but not NIS databases).

2.6.2. Potential problems

Problems can occur if the process rebuilding the database dies (due to being killed or a system crash) before completing the rebuild.
Sendmail has three techniques to try to relieve these problems. First, it ignores interrupts while rebuilding the database. Finally, at the end of the rebuild it adds an alias of the form

    @: @

(which is not normally legal). Before sendmail will access the database, it checks to insure that this entry exists[11].

2.6.3. List owners

If an error occurs on sending to a certain address, say ``x'', sendmail will look for an alias of the form ``owner-x'' to receive the errors. For example, the aliases

    unix-wizards: eric@ucbarpa, wnj@monet, nosuchuser, sam@matisse
    owner-unix-wizards: unix-wizards-request
    unix-wizards-request: eric@ucbarpa

would cause "eric@ucbarpa" to get the error that will occur when someone sends to unix-wizards due to the inclusion of "nosuchuser" on the list.

List owners also cause the envelope sender address to be modified. The contents of the owner alias are used if they point to a single user, otherwise the name of the alias itself is used. For this reason, and to obey Internet conventions, the "owner-" address normally points at the "-request" address; this causes messages to go out with the typical Internet convention of using ``list-request'' as the return address.

2.7. User Information Database

This option is deprecated; use virtusertable and genericstable instead, as explained in cf/README. If you have a version of sendmail with the user information database compiled in, and you have specified one or more databases using the U option, the databases will be searched for a user:maildrop entry. If found, the mail will be sent to the specified address.

2.8. Per-User Forwarding (.forward Files)

As an alternative to the alias database, any user may put a file with the name ".forward" in his or her home directory. If this file exists, sendmail redirects mail for that user to the list of addresses listed in the .forward file. Note that aliases are fully expanded before forward files are referenced.

____________________
[11] The AliasWait option is required in the configuration for this action to occur. This should normally be specified.
For example, if the home directory for user "mckusick" has a .forward file with contents:

    mckusick@ernie
    kirk@calder

then any mail arriving for "mckusick" will be redirected to the specified accounts.

2.9. Special Header Lines

Several header lines have special interpretations defined by the configuration file. Others have interpretations built into sendmail that cannot be changed without changing the code. These built-ins are described here.

2.9.1. Errors-To:

If errors occur anywhere during processing, this header will cause error messages to go to the listed addresses. This is intended for mailing lists. The Errors-To: header was created in the bad old days when UUCP didn't understand the distinction between an envelope and a header.

2.9.2. Apparently-To:

If a message comes in with no recipient headers, one of the possible actions is to add an "Apparently-To:" header line for any recipients it is aware of. The Apparently-To: header is non-standard and is both deprecated and strongly discouraged.

2.9.3. Precedence

The Precedence: header can be used as a crude control of message priority. It tweaks the sort order in the queue and can be configured to change the message timeout values. The precedence of a message also controls how delivery status notifications (DSNs) are processed for that message.

2.10. IDENT Protocol Support

Sendmail supports the IDENT protocol as defined in RFC 1413. Note that the RFC states a client should wait at least 30 seconds for a response. The default Timeout.ident is 5 seconds, as many sites have adopted the practice of dropping IDENT queries; this has led to delays processing mail. Although this enhances identification of the author of an email message by doing a ``call back'' to the originating system to include the owner of a particular TCP connection in the audit trail, it is in no sense perfect; a determined forger can easily spoof the IDENT protocol. The following description is excerpted from RFC 1413:

6.
Security Considerations

The information returned by this protocol is at most as trustworthy as the host providing it OR the organization operating the host. For example, a PC in an open lab has few if any controls on it to prevent a user from having this protocol return any identifier the user wants. Likewise, if the host has been compromised the information returned may be completely erroneous and misleading.

The Identification Protocol is not intended as an authorization or access control protocol. At best, it provides some additional auditing information with respect to TCP connections. At worst, it can provide misleading, incorrect, or maliciously incorrect information.

The use of the information returned by this protocol for other than auditing is strongly discouraged. Specifically, using Identification Protocol information to make access control decisions - either as the primary method (i.e., no other checks) or as an adjunct to other methods - may result in a weakening of normal host security.

An Identification server may reveal information about users, entities, objects or processes which might normally be considered private.

In some cases your system may not work properly with IDENT support due to a bug in the TCP/IP implementation. The symptoms will be that for some hosts the SMTP connection will be closed almost immediately. If this is true or if you do not want to use IDENT, you should set the IDENT timeout to zero; this will disable the IDENT protocol.

3. ARGUMENTS

The complete list of arguments to sendmail is described in detail in Appendix A. Some important arguments are described here.

3.1. Queue Interval

The amount of time between forking a process to run through the queue is defined by the -q flag. If you run with delivery mode set to i or b this can be relatively large, since it will only be relevant when a host that was down comes back up.
If you run in q mode it should be relatively short, since it defines the maximum amount of time that a message may sit in the queue. (See also the MinQueueAge option.) RFC 1123 section 5.3.1.1 says that this value should be at least 30 minutes (although that probably doesn't make sense if you use ``queue-only'' mode).

Notice: the meaning of the interval time depends on whether normal queue runners or persistent queue runners are used. For the former, it is the time between subsequent starts of a queue run. For the latter, it is the time sendmail waits after a persistent queue runner has finished its work to start the next one. Hence for persistent queue runners this interval should be very low, typically no more than two minutes.

3.2. Daemon Mode

If you allow incoming mail over an IPC connection, you should have a daemon running. This should be set by your /etc/rc file using the -bd flag. The -bd flag and the -q flag may be combined in one call:

    /usr/sbin/sendmail -bd -q30m

An alternative approach is to invoke sendmail from inetd(8) (use the -bs -Am flags to ask sendmail to speak SMTP on its standard input and output and to run as MTA). This works and allows you to wrap sendmail in a TCP wrapper program, but may be a bit slower since the configuration file has to be re-read on every message that comes in. If you do this, you still need to have a sendmail running to flush the queue:

    /usr/sbin/sendmail -q30m

3.3. Forcing the Queue

In some cases you may find that the queue has gotten clogged for some reason. You can force a queue run using the -q flag (with no value). It is entertaining to use the -v flag (verbose) when this is done to watch what happens:

    /usr/sbin/sendmail -q -v

You can also limit the jobs to those with a particular queue identifier, recipient, sender, quarantine reason, or queue group using one of the queue modifiers.
For example, "-qRberkeley" restricts the queue run to jobs that have the string "berkeley" somewhere in one of the recipient addresses. Similarly, "-qSstring" limits the run to particular senders, "-qIstring" limits it to particular queue identifiers, "-qQstring" limits it to particular quarantine reasons and operates only on quarantined queue items, and "-qGstring" limits it to a particular queue group. The named queue group will be run even if it is set to have 0 runners. You may also place an ! before the I or R or S or Q to indicate that jobs are limited to not including a particular queue identifier, recipient or sender. For example, "-q!Rseattle" limits the queue run to jobs that do not have the string "seattle" somewhere in one of the recipient addresses.

Should you need to terminate the queue jobs currently active then a SIGTERM to the parent of the process (or processes) will cleanly stop the jobs.

3.4. Debugging

A fairly large number of debug flags are built into sendmail; they are selected with the -d flag. You should never run a production sendmail server in debug mode. Many of the debug flags will result in debug output being sent over the SMTP channel unless the option -D is used. This will confuse many mail programs. However, for testing purposes, it can be useful when sending mail manually via telnet to the port you are using while debugging. The syntax is:

    debug-flag:       -d debug-list
    debug-list:       debug-option [ , debug-option ]*
    debug-option:     debug-categories [ . debug-level ]
    debug-categories: integer | integer - integer | category-pattern
    category-pattern: [a-zA-Z_*?][a-zA-Z0-9_*?]*
    debug-level:      integer

where spaces are for reading ease only.
For example,

    -d12            Set category 12 to level 1
    -d12.3          Set category 12 to level 3
    -d3-17          Set categories 3 through 17 to level 1
    -d3-17.4        Set categories 3 through 17 to level 4
    -dANSI          Set category ANSI to level 1
    -dsm_trace_*.3  Set all named categories matching sm_trace_* to level 3

For a complete list of the available debug flags you will have to look at the code and the TRACEFLAGS file in the sendmail distribution (they are too dynamic to keep this document up to date). For a list of named debug categories in the sendmail binary, use

    ident /usr/sbin/sendmail | grep Debug

3.5. Changing the Values of Options

Options can be overridden using the -o or -O command line flags. For example,

    /usr/sbin/sendmail -oT2m

sets the T (timeout) option to two minutes for this run only; the equivalent line using the long option name is

    /usr/sbin/sendmail -OTimeout.queuereturn=2m

Some options have security implications. Sendmail allows you to set these, but relinquishes its set-user-ID or set-group-ID permissions thereafter[12].

3.6. Trying a Different Configuration File

An alternative configuration file can be specified using the -C flag; for example,

    /usr/sbin/sendmail -Ctest.cf -oQ/tmp/mqueue

uses the configuration file test.cf instead of the default /etc/mail/sendmail.cf. If the -C flag has no value it defaults to sendmail.cf in the current directory. Sendmail gives up set-user-ID root permissions (if it has been installed set-user-ID root) when you use this flag, so it is common to use a publicly writable directory (such as /tmp) as the queue directory (QueueDirectory or Q option) while testing.

3.7. Logging Traffic

Many SMTP implementations do not fully implement the protocol. For example, some personal computer based SMTPs do not understand continuation lines in reply codes. These can be very hard to trace.
If you suspect such a problem, you can set traffic logging using the -X flag. For example,

    /usr/sbin/sendmail -X /tmp/traffic -bd

will log all traffic in the file /tmp/traffic. This logs a lot of data very quickly and should NEVER be used during normal operations. After starting up such a daemon, force the errant implementation to send a message to your host. All message traffic in and out of sendmail, including the incoming SMTP traffic, will be logged in this file.

3.8. Testing Configuration Files

When you build a configuration table, you can do a certain amount of testing using the "test mode" of sendmail. For example, you could invoke sendmail as:

    sendmail -bt -Ctest.cf

which would read the configuration file "test.cf" and enter test mode. In this mode, you enter lines of the form:

    rwset address

where rwset is the rewriting set you want to use and address is an address to apply the set to. Test mode shows you the steps it takes as it proceeds, finally showing you the address it ends up with. You may use a comma separated list of rwsets for sequential application of rules to an input. For example:

    3,1,21,4 monet:bollard

first applies ruleset three to the input "monet:bollard." Ruleset one is then applied to the output of ruleset three, followed similarly by rulesets twenty-one and four.

If you need more detail, you can also use the "-d21" flag to turn on more debugging. For example,

    sendmail -bt -d21.99

turns on an incredible amount of information; a single word address is probably going to print out several pages worth of information.

You should be warned that internally, sendmail applies ruleset 3 to all addresses. In test mode you will have to do that manually.

____________________
[12] That is, it sets its effective uid to the real uid; thus, if you are executing as root, as from root's crontab file or during system startup, the root permissions will still be honored.
For example, older versions allowed you to use

    0 bruce@broadcast.sony.com

This version requires that you use:

    3,0 bruce@broadcast.sony.com

As of version 8.7, some other syntaxes are available in test mode:

    .Dxvalue      defines macro x to have the indicated value. This is useful when debugging rules that use the $&x syntax.
    .Ccvalue      adds the indicated value to class c.
    =Sruleset     dumps the contents of the indicated ruleset.
    -ddebug-spec  is equivalent to the command-line flag.

Version 8.9 introduced more features:

    ?             shows a help message.
    =M            display the known mailers.
    $m            print the value of macro m.
    $=c           print the contents of class c.
    /mx host      returns the MX records for `host'.
    /parse address
                  parse address, returning the value of crackaddr, and the parsed address.
    /map mapname key
                  look up `key' in the indicated `mapname'.
    /quit         quit address test mode.

3.9. Persistent Host Status Information

Purging the disk based host status information does not prevent existing processes from using the status information that they already have.

4. TUNING

There are a number of configuration parameters you may want to change, depending on the requirements of your site. Most of these are set using an option in the configuration file; see section 5 for more details.

4.1. Timeouts

All time intervals are set using a scaled syntax. For example, "10m" represents ten minutes, whereas "2h30m" represents two and a half hours. The full set of scales is:

    s   seconds
    m   minutes
    h   hours
    d   days
    w   weeks

4.1.1. Queue interval

The argument to the -q flag specifies how often a sub-daemon will run the queue. This is typically set to between fifteen minutes and one hour. If not set, or set to zero, the queue will not be run automatically. RFC 1123 section 5.3.1.1 recommends that this be at least 30 minutes. Should you need to terminate the queue jobs currently active then a SIGTERM to the parent of the process (or processes) will cleanly stop the jobs.

4.1.2.
Read timeouts

Timeouts all have option names "Timeout.suboption". Most of these control SMTP operations. The recognized suboptions, their default values, and the minimum values allowed by RFC 2821 section 4.5.3.2 (or RFC 1123 section 5.3.2) are:

connect     The time to wait for an SMTP connection to open (the connect(2) system call) [0, unspecified]. If zero, uses the kernel default. In no case can this option extend the timeout longer than the kernel provides, but it can shorten it. This is to get around kernels that provide an absurdly long connection timeout (90 minutes in one case).

iconnect    The same as connect, except it applies only to the initial attempt to connect to a host for a given message [0, unspecified]. The concept is that this should be very short (a few seconds); hosts that are well connected and responsive will thus be serviced immediately. Hosts that are slow will not hold up other deliveries in the initial delivery attempt.

aconnect    The overall timeout waiting for all connections for a single delivery attempt to succeed [0, unspecified]. If 0, no overall limit is applied. This can be used to restrict the total amount of time trying to connect to a long list of hosts that could accept an e-mail for the recipient. This timeout does not apply to FallbackMXhost, i.e., if the time is exhausted, the FallbackMXhost is tried next.

initial     The wait for the initial 220 greeting message [5m, 5m].

helo        The wait for a reply from a HELO or EHLO command [5m, unspecified]. This may require a host name lookup, so five minutes is probably a reasonable minimum.

mail-       The wait for a reply from a MAIL command [10m, 5m].

rcpt-       The wait for a reply from a RCPT command [1h, 5m]. This should be long because it could be pointing at a list that takes a long time to expand (see below).

datainit-   The wait for a reply from a DATA command [5m, 2m].
	datablock*#	The wait for reading a data block (that is, the body of the message) [1h, 3m].  This should be long because it also applies to programs piping input to sendmail which have no guarantee of promptness.

	datafinal*	The wait for a reply from the dot terminating a message [1h, 10m].  If this is shorter than the time actually needed for the receiver to deliver the message, duplicates will be generated.  This is discussed in RFC 1047.

	rset	The wait for a reply from a RSET command [5m, unspecified].

	quit	The wait for a reply from a QUIT command [2m, unspecified].

	misc	The wait for a reply from miscellaneous (but short) commands such as NOOP (no-operation) and VERB (go into verbose mode) [2m, unspecified].

	command*#	In server SMTP, the time to wait for another command [1h, 5m].

	ident#	The timeout waiting for a reply to an IDENT query [5s[13], unspecified].

	lhlo	The wait for a reply to an LMTP LHLO command [2m, unspecified].

	auth	The timeout for a reply in an SMTP AUTH dialogue [10m, unspecified].

	starttls	The timeout for a reply to an SMTP STARTTLS command and the TLS handshake [1h, unspecified].

	fileopen#	The timeout for opening .forward and :include: files [60s, none].

	control#	The timeout for a complete control socket transaction to complete [2m, none].

	hoststatus#	How long status information about a host (e.g., host down) will be cached before it is considered stale [30m, unspecified].

	resolver.retrans#	The resolver's retransmission time interval (in seconds) [varies].  Sets both Timeout.resolver.retrans.first and Timeout.resolver.retrans.normal.

	resolver.retrans.first#	The resolver's retransmission time interval (in seconds) for the first attempt to deliver a message [varies].

____________________
[13]On some systems the default is zero to turn the protocol off entirely.
	resolver.retrans.normal#	The resolver's retransmission time interval (in seconds) for all resolver lookups except the first delivery attempt [varies].

	resolver.retry#	The number of times to retransmit a resolver query.  Sets both Timeout.resolver.retry.first and Timeout.resolver.retry.normal [varies].

	resolver.retry.first#	The number of times to retransmit a resolver query for the first attempt to deliver a message [varies].

	resolver.retry.normal#	The number of times to retransmit a resolver query for all resolver lookups except the first delivery attempt [varies].

For compatibility with old configuration files, if no suboption is specified, all the timeouts marked with an asterisk (*) are set to the indicated value.  All but those marked with a pound sign (#) apply to client SMTP.  For example, the lines:

	O Timeout.command=25m
	O Timeout.datablock=3h

set the server SMTP command timeout to 25 minutes and the input data block timeout to three hours.

4.1.3. Message timeouts

After sitting in the queue for a few days, an undeliverable message will time out.  This is to insure that at least the sender is aware of the inability to send a message.  The timeout is typically set to five days.  ...  If the message has a normal (default) precedence and it is a delivery status notification (DSN), Timeout.queuereturn.dsn and Timeout.queuewarn.dsn can be used to give an alternative warn and return time for DSNs.  For example, ``OT5d/4h'' causes email to fail after five days, but a warning message will be sent after four hours.  This should be large enough that the message will have been tried several times.

4.2. Forking During Queue Runs

...  If the ForkEachJob option is set, sendmail cannot use connection caching.

4.3.
Queue Priorities

Every message is assigned a priority when it is first instantiated, consisting of the message size (in bytes) offset by the message class (which is determined from the Precedence: header) and the number of recipients, that is:

	pri = msgsize - (class x ClassFactor) + (nrcpt x RecipientFactor)

(Remember, higher values for this parameter actually mean that the job will be treated with lower priority.)

4.4. Load Limiting

...  The message will be queued if the QueueFactor option divided by the difference in the current load average and the QueueLA option plus one is less than the priority of the message; that is, the message is queued iff:

	pri > QueueFactor / (LA - QueueLA + 1)

4.5. Resource Limits

Sendmail has several parameters to control resource usage.  Besides those mentioned in the previous section, there are at least MaxDaemonChildren, ConnectionRateThrottle, MaxQueueChildren, and MaxRunnersPerQueue.  The latter two limit the number of sendmail processes that operate on the queue.  These are discussed in the section ``Queue Group Declaration''.  The former two can be used to limit the number of incoming connections.  Their appropriate values depend on the host operating system and the hardware, e.g., amount of memory.  In many situations it might be useful to set limits to prevent having too many sendmail processes; however, these limits can be abused to mount a denial of service attack.  For example, if MaxDaemonChildren=10 then an attacker needs to open only 10 SMTP sessions to the server, leave them idle for most of the time, and no more connections will be accepted.  If this option is set then the timeouts used in an SMTP session should be lowered from their default values to their minimum values as specified in RFC 2821 and listed in section 4.1.2.

4.6. Measures against Denial of Service Attacks

Sendmail has some built-in measures against simple denial of service (DoS) attacks.  The SMTP server by default slows down if too many bad commands are issued or if some commands are repeated too often within a session.
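As a concrete illustration of the advice in section 4.5, the following sendmail.cf fragment is a sketch only: the child limit is an arbitrary example value, not a recommendation, while the Timeout settings are the RFC 2821 minimums from the table in section 4.1.2.

	# limit simultaneous daemon children (value is site-specific)
	O MaxDaemonChildren=40
	# with such a limit in place, shrink SMTP session timeouts
	# toward the RFC 2821 minimums so idle sessions are reclaimed
	O Timeout.mail=5m
	O Timeout.rcpt=5m
	O Timeout.datainit=2m
	O Timeout.datablock=3m
	O Timeout.command=5m

Without the MaxDaemonChildren line, the timeout changes alone buy little; it is the combination that limits how long an attacker can pin down all available server slots.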
Details can be found in the source file sendmail/srvrsmtp.c by looking for the macro definitions of MAXBADCOMMANDS, MAXNOOPCOMMANDS, MAXHELOCOMMANDS, MAXVRFYCOMMANDS, and MAXETRNCOMMANDS.  If an SMTP command is issued more often than the corresponding MAXcmdCOMMANDS value, then the response is delayed exponentially, starting with a sleep time of one second, up to a maximum of four minutes (as defined by MAXTIMEOUT).  If the option MaxDaemonChildren is set to a value greater than zero, then this could make a DoS attack even worse since it keeps a connection open longer than necessary.  Therefore a connection is terminated with a 421 SMTP reply code if the number of commands exceeds the limit by a factor of two and MAXBADCOMMANDS is set to a value greater than zero (the default is 25).

4.7. Delivery Mode

There are a number of delivery modes that sendmail can operate in, set by the DeliveryMode (d) configuration option.  ... messages (e.g., host unknown during the SMTP protocol) will be delayed using this mode.  Mode "b" is the usual default.

4.8. Log Level

The level of logging can be set for sendmail.  The default using a standard configuration table is level 9.  The levels are as follows:

	0	Minimal logging.
	1	Serious system failures and potential security problems.
	2	Lost communications (network problems) and protocol failures.
	3	Other serious failures, malformed addresses, transient forward/include errors, connection timeouts.
	4	Minor failures, out of date alias databases, connection rejections via check_* rulesets.
	5	Message collection statistics.
	6	Creation of error messages, VRFY and EXPN commands.
	7	Delivery failures (host or user unknown, etc.).
	8	Successful deliveries and alias database rebuilds.
	9	Messages being deferred (due to a host being down, etc.).
	...
	30	Lost locks (only if using lockf instead of flock).
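To change the logging level from the standard default of 9, set the LogLevel option in the configuration file; the value below is only illustrative:

	# log at level 14 instead of the default 9
	O LogLevel=14

Higher levels include everything logged at the levels below them, so raising the value only ever adds output.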
Additionally, values above 64 are reserved for extremely verbose debugging output.  No normal site would ever set these.

4.9. File Modes

The modes used for files depend on what functionality ...

4.9.1. To suid or not to suid?

...  However, this will cause mail processing to be accounted (using sa(8)) to root rather than to the user sending the mail.  A middle ground is to set the RunAsUser option.  This causes sendmail to become the indicated user; since it is then no longer running with its original uid, delivery to programs or files will be marked as unsafe, e.g., undeliverable, in .forward, aliases, and :include: files.  Administrators can override this by setting the DontBlameSendmail option to the setting NonRootSafeAddr.  RunAsUser is probably best suited for firewall configurations that don't have regular user logins.  If the option is used on a system which performs local delivery, then the local delivery agent must have the proper permissions (i.e., usually set-user-ID root) since it will be invoked by the RunAsUser, not by root.

4.9.2. Turning off security checks

...  Also, sendmail will refuse to create a new aliases database in an unsafe directory.  You can get around this by manually creating the database file as a trusted user ahead of time and then rebuilding the aliases database with newaliases.

If you are quite sure that your configuration is safe and you want sendmail to avoid these security checks, you can turn off certain checks using the DontBlameSendmail option.  This option takes one or more names that disable checks.  In the descriptions that follow, "unsafe directory" means a directory that is writable by anyone other than the owner.  The values are:

	Safe	No special handling.

	AssumeSafeChown	Assume that the chown system call is restricted to root.
Since some versions of UNIX permit regular users to give away their files to other users on some filesystems, sendmail often cannot assume that a given file was created by the owner, particularly when it is in a writable directory.  You can set this flag if you know that file giveaway is restricted on your system.

	ClassFileInUnsafeDirPath	When reading class files (using the F line in the configuration file), allow files that are in unsafe directories.

	DontWarnForwardFileInUnsafeDirPath	Prevent logging of unsafe directory path warnings for non-existent forward files.

	ErrorHeaderInUnsafeDirPath	Allow the file named in the ErrorHeader option to be in an unsafe directory.

	FileDeliveryToHardLink	Allow delivery to files that are hard links.

	FileDeliveryToSymLink	Allow delivery to files that are symbolic links.

	ForwardFileInGroupWritableDirPath	Allow .forward files in group writable directories.

	ForwardFileInUnsafeDirPath	Allow .forward files in unsafe directories.

	ForwardFileInUnsafeDirPathSafe	Allow a .forward file that is in an unsafe directory to include references to programs and files.

	GroupReadableKeyFile	Accept a group-readable key file for STARTTLS.

	GroupReadableSASLDBFile	Accept a group-readable Cyrus SASL password file.

	GroupWritableAliasFile	Allow group-writable alias files.

	GroupWritableDirPathSafe	Change the definition of "unsafe directory" to consider group-writable directories to be safe.  World-writable directories are always unsafe.

	GroupWritableForwardFile	Allow group writable .forward files.

	GroupWritableForwardFileSafe	Accept group-writable .forward files as safe for program and file delivery.

	GroupWritableIncludeFile	Allow group writable :include: files.

	GroupWritableIncludeFileSafe	Accept group-writable :include: files as safe for program and file delivery.

	GroupWritableSASLDBFile	Accept a group-writable Cyrus SASL password file.
	HelpFileInUnsafeDirPath	Allow the file named in the HelpFile option to be in an unsafe directory.

	IncludeFileInGroupWritableDirPath	Allow :include: files in group writable directories.

	IncludeFileInUnsafeDirPath	Allow :include: files in unsafe directories.

	IncludeFileInUnsafeDirPathSafe	Allow a :include: file that is in an unsafe directory to include references to programs and files.

	InsufficientEntropy	Try to use STARTTLS even if the PRNG for OpenSSL is not properly seeded, despite the security problems.

	LinkedAliasFileInWritableDir	Allow an alias file that is a link in a writable directory.

	LinkedClassFileInWritableDir	Allow class files that are links in writable directories.

	LinkedForwardFileInWritableDir	Allow .forward files that are links in writable directories.

	LinkedIncludeFileInWritableDir	Allow :include: files that are links in writable directories.

	LinkedMapInWritableDir	Allow map files that are links in writable directories.  This includes alias database files.

	LinkedServiceSwitchFileInWritableDir	Allow the service switch file to be a link even if the directory is writable.

	MapInUnsafeDirPath	Allow maps (e.g., hash, btree, and dbm files) in unsafe directories.  This includes alias database files.

	NonRootSafeAddr	Do not mark file and program deliveries as unsafe if sendmail is not running with root privileges.

	RunProgramInUnsafeDirPath	Run programs that are in writable directories without logging a warning.

	RunWritableProgram	Run programs that are group- or world-writable without logging a warning.

	TrustStickyBit	Allow group or world writable directories if the sticky bit is set on the directory.  Do not set this on systems which do not honor the sticky bit on directories.

	WorldWritableAliasFile	Accept world-writable alias files.

	WorldWritableForwardfile	Allow world writable .forward files.

	WorldWritableIncludefile	Allow world writable :include: files.
	WriteMapToHardLink	Allow writes to maps that are hard links.

	WriteMapToSymLink	Allow writes to maps that are symbolic links.

	WriteStatsToHardLink	Allow the status file to be a hard link.

	WriteStatsToSymLink	Allow the status file to be a symbolic link.

4.10. Connection Caching

When processing the queue, sendmail will try to keep the last few open connections open to avoid startup and shutdown costs.  This only applies to IPC and LPC connections.

When trying to open a connection the cache is first searched.  If an open connection is found, it is probed to see if it is still active by sending a RSET command.  It is not an error if this fails; instead, ...

4.11. Name Server Access

... (notably ...)

	O ResolverOptions=+AAONLY -DNSRCH

turns on the AAONLY (accept authoritative answers only) option and turns off the DNSRCH (search the domain path) option.  Most resolver libraries default the DNSRCH, DEFNAMES, and RECURSE flags on and all others off.  If NETINET6 is enabled, most libraries default to USE_INET6 as well.  You can also include "HasWildcardMX" to specify that there is a wildcard MX record matching your domain; this turns off MX matching when canonifying names, which can lead to inappropriate canonifications.  Use "WorkAroundBrokenAAAA" when faced with a broken nameserver that returns SERVFAIL (a temporary failure) on T_AAAA (IPv6) lookups during hostname canonification.  Notice: it might be necessary to apply the same (or similar) options to submit.cf too.

... different department.  ... searched when linking.

4.12. Moving the Per-User Forward Files

	O ForwardPath=/var/forward/$u:$z/.forward.$w

would first look for a file with the same name as the user's login in /var/forward; if that is not found (or is inaccessible) the file ``.forward.machinename'' in the user's home directory is searched.
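If a directory such as /var/forward is not set up with the ownerships and modes described below, sendmail's safety checks may refuse the forward files in it; one of the DontBlameSendmail names from section 4.9.2 can then relax the check.  A sketch (relaxing checks weakens security, as that section warns):

	O ForwardPath=/var/forward/$u:$z/.forward.$w
	# only needed if the forward files fail the default safety checks
	O DontBlameSendmail=ForwardFileInUnsafeDirPath

The safer course, described next, is to arrange the directory permissions so that no DontBlameSendmail setting is needed at all.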
A truly perverse site could also search by sender by using $r, $s, or $f.

If you create a directory such as /var/forward, it should be mode 1777 (that is, the sticky bit should be set).  Users should create the files mode 0644.  Alternatively, you can make the directory mode 0755 and create empty files for each user, owned by that user, mode 0644.  If you do this, you don't have to set the DontBlameSendmail options indicated above.

4.13. Free Space

...  processed without difficulty.

4.14. Maximum Message Size

To avoid overflowing your system with a large message, the MaxMessageSize option can be set to set an absolute limit on the size of any one message.  This will be advertised in the ESMTP dialogue and checked during message collection.

4.15. Privacy Flags

The PrivacyOptions (p) option allows you to set certain ``privacy'' flags.  Actually, many of them don't give you any extra privacy; rather, they just insist that client SMTP servers use the HELO command before using certain commands or add extra headers to indicate possible spoof attempts.  ... EXPN command.  The flags are detailed in section 5.6.

4.16. Send to Me Too

...

5. THE WHOLE SCOOP ON THE CONFIGURATION FILE

This section describes the configuration file in detail.

There is one point that should be made clear immediately: the syntax of the configuration file is designed to be reasonably easy to parse, since this is done every time sendmail starts up, rather than easy for a human to read or write.  The configuration file should be generated via the method described in cf/README; it should not be edited directly unless someone is familiar with the internals of the syntax described here and it is not possible to achieve the desired result via the default method.

The configuration file is organized as a series of lines, each of which begins with a single character defining the semantics for the rest of the line.
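As a brief illustration of this line-oriented format, here are lines of several of the types used throughout this section; the particular names and values are hypothetical:

	# a comment line
	# define macro j (here, from the built-in $w macro)
	Dj$w.Foo.COM
	# add a name to class w
	Cwlocalhost
	# set an option
	O LogLevel=9

Only the first character is significant to the parser; everything after it is interpreted according to that character's rules.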
Lines beginning with a space or a tab are continuation lines (although the semantics are not well defined in many places).  Blank lines and lines beginning with a sharp symbol (`#') are comments.

5.1. R and S -- Rewriting Rules

The core of address parsing are the rewriting rules.  These are an ordered production system.  Sendmail scans through the set of rewriting rules looking for a match on the left hand side (LHS) of the rule.  When a rule matches, the address is replaced by the right hand side (RHS) of the rule.

There are several sets of rewriting rules.  Some of the rewriting sets are used internally and must have specific semantics.  Other rewriting sets do not have specifically assigned semantics, and may be referenced by the mailer definitions or by other rewriting sets.

The syntax of these two commands are:

	Sn

Sets the current ruleset being collected to n.  If you begin a ruleset more than once it appends to the old definition.

	Rlhs rhs comments

The fields must be separated by at least one tab character; there may be embedded spaces in the fields.  The lhs is a pattern that is applied to the input.  If it matches, the input is rewritten to the rhs.  The comments are ignored.

Macro expansions of the form $x are performed when the configuration file is read.  A literal $ can be included using $$.  Expansions of the form $&x are performed at run time using a somewhat less general algorithm.  This is intended only for referencing internally defined macros such as $h that are changed at runtime.

5.1.1. The left hand side

The left hand side of rewriting rules contains a pattern.  Normal words are simply matched directly.  Metasyntax is introduced using a dollar sign.  The metasymbols are:

	$*	Match zero or more tokens
	$+	Match one or more tokens
	$-	Match exactly one token
	$=x	Match any phrase in class x
	$~x	Match any word not in class x

If any of these match, they are assigned to the symbol $n for replacement on the RHS, where n is the index in the LHS.  For example, if the LHS:

	$-:$+

is applied to the input:

	UCBARPA:eric

the rule will match, and the values passed to the RHS will be:

	$1	UCBARPA
	$2	eric

Additionally, the LHS can include $@ to match zero tokens.  This is not bound to a $n on the RHS, and is normally only used when it stands alone in order to match the null input.

5.1.2. The right hand side

When the left hand side of a rewriting rule matches, the input is deleted and replaced by the right hand side.  Tokens are copied directly from the RHS unless they begin with a dollar sign.  Metasymbols are:

	$n	Substitute indefinite token n from LHS
	$[name$]	Canonicalize name
	$(map key $@arguments $:default $)	Generalized keyed mapping function
	$>n	"Call" ruleset n
	$#mailer	Resolve to mailer
	$@host	Specify host
	$:user	Specify user

The $n syntax substitutes the corresponding value from a $+, $-, $*, $=, or $~ match on the LHS.  It may be used anywhere.

A host name enclosed between $[ and $] is looked up in the host database(s) and replaced by the canonical name[14].  For example, "$[ftp$]" might become "" and "$[[128.32.130.2]$]" would become "vangogh.CS.Berkeley.EDU."  Sendmail recognizes its numeric IP address without calling the name server and replaces it with its canonical name.

The $( ... $) syntax is a more general form of lookup; it uses a named map instead of an implicit map.  If no lookup is found, the indicated default is inserted; if no default is specified and no lookup matches, the value is left unchanged.  The arguments are passed to the map for possible use.

The $>n syntax causes the remainder of the line to be substituted as usual and then passed as the argument to ruleset n.  The final value of ruleset n then becomes the substitution for this rule.  The $> syntax expands everything after the ruleset name to the end of the replacement string and then passes that as the initial input to the ruleset.  Recursive calls are allowed.
For example,

	$>0 $>3 $1

expands $1, passes that to ruleset 3, and then passes the result of ruleset 3 to ruleset 0.

The $# syntax should only be used in ruleset zero, a subroutine of ruleset zero, or rulesets that return decisions (e.g., check_rcpt).  It causes evaluation of the ruleset to terminate immediately, and signals to sendmail that the address has completely resolved.  The complete syntax for ruleset 0 is:

	$#mailer $@host $:user

This specifies the {mailer, host, user} 3-tuple necessary to direct the mailer.  Note: the third element (user) is often also called the address part.  If the mailer is local the host part may be omitted[15].  The mailer must be a single word, but the host and user may be multi-part.  If the mailer is the built-in IPC mailer, the host may be a colon-separated list of hosts that are searched in order for the first working address (exactly like MX records).  The user is later rewritten by the mailer-specific envelope rewriting set and assigned to the $u macro.  As a special case, if the mailer specified has the F=@ flag specified and the first character of the $: value is "@", the "@" is stripped off, and a flag is set in the address descriptor that causes sendmail to not do ruleset 5 processing.

Normally, a rule that matches is retried, that is, the rule loops until it fails.  A RHS may also be preceded by a $@ or a $: to change this behavior.  A $@ prefix causes the ruleset to return with the remainder of the RHS as the value.  A $: prefix causes the rule to terminate immediately, but the ruleset to continue; this can be used to avoid continued application of a rule.  The prefix is stripped before continuing.

____________________
[14]This is actually completely equivalent to $(host hostname$).  In particular, a $: default can be used.
The $@ and $: prefixes may precede a $> spec; for example:

	R$+	$: $>7 $1

matches anything, passes that to ruleset seven, and continues; the $: is necessary to avoid an infinite loop.

Substitution occurs in the order described, that is, parameters from the LHS are substituted, hostnames are canonicalized, "subroutines" are called, and finally $#, $@, and $: are processed.

5.1.3. Semantics of rewriting rule sets

There are six rewriting sets that have specific semantics.  Five of these are related as depicted by figure 1.

____________________________________________________________

	                    +---+
	                 -->| 0 |--> resolved address
	                /   +---+
	               /          +---+   +---+
	              /     ---->| 1 |-->| S |--
	 +---+       /     /      +---+   +---+  \    +---+
	addr-->| 3 |-->| D |--                    --->| 4 |--> msg
	 +---+      +---+  \      +---+   +---+  /    +---+
	                    ---->| 2 |-->| R |--
	                          +---+   +---+

	Figure 1 -- Rewriting set semantics

	D -- sender domain addition
	S -- mailer-specific sender rewriting
	R -- mailer-specific recipient rewriting
____________________________________________________________

Ruleset three should turn the address into "canonical form."  This form should have the basic syntax:

	local-part@host-domain-spec

Ruleset three is applied by sendmail before doing anything with any address.

If no "@" sign is specified, then the host-domain-spec may be appended (box "D" in Figure 1) from the sender address (if the C flag is set in the mailer definition corresponding to the sending mailer).

Ruleset zero is applied after ruleset three to addresses that are going to actually specify recipients.  It must resolve to a {mailer, host, address} triple.  The mailer must be defined in the mailer definitions from the configuration file.

____________________
[15]You may want to use it for special "per user" extensions.  For example, in the address "jgm+foo@CMU.EDU", the "+foo" part is not part of the user name, and is passed to the local mailer for local use.
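To make the resolution step concrete, here is a hypothetical ruleset 0 rule; the mailer name (esmtp), the host name, and the angle-bracket focusing convention are assumptions borrowed from typical generated configurations, not taken from this text.  The fields are tab-separated, as required for R lines:

	S0
	# route mail for one host via the IPC (SMTP) mailer -- hypothetical
	R$+ < @ vangogh.CS.Berkeley.EDU . >	$#esmtp $@ vangogh.CS.Berkeley.EDU $: $1

The RHS resolves to the {mailer, host, user} 3-tuple described in section 5.1.2, which terminates evaluation of the ruleset.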
The host is defined into the $h macro for use in the argv expansion of the specified mailer.  Notice: since the envelope sender address will be used if a delivery status notification must be sent, i.e., it may specify a recipient, it is also run through ruleset zero.  If ruleset zero returns a temporary error 4xy then delivery is deferred.  This can be used to temporarily disable delivery, e.g., based on the time of the day or other varying parameters.  It should not be used to quarantine e-mails.

Rulesets one and two are applied to all sender and recipient addresses respectively.  They are applied before any specification in the mailer definition.  They must never resolve.

Ruleset four is applied to all addresses in the message.  It is typically used to translate internal to external form.

In addition, ruleset 5 is applied to all local addresses (specifically, those that resolve to a mailer with the `F=5' flag set) that do not have aliases.  This allows a last minute hook for local names.

5.1.4. Ruleset hooks

A few extra rulesets are defined as "hooks" that can be defined to get special features.  They are all named rulesets.  The "check_*" forms all give accept/reject status; falling off the end or returning normally is an accept, and resolving to $#error is a reject or quarantine.  Quarantining is chosen by specifying quarantine in the second part of the mailer triplet:

	$#error $@ quarantine $: Reason for quarantine

Many of these can also resolve to the special mailer name $#discard; this accepts the message as though it were successful but then discards it without delivery.  Note, this mailer cannot be chosen as a mailer in ruleset 0.  Note also that all "check_*" rulesets have to deal with temporary failures, especially for map lookups, themselves, i.e., they should return a temporary error code or at least they should make a proper decision in those cases.

5.1.4.1.
check_relay

The check_relay ruleset is called after a connection is accepted by the daemon.  It is not called when sendmail is started using the -bs option.  It is passed

	client.host.name $| client.host.address

where $| is a metacharacter separating the two parts.  This ruleset can reject connections from various locations.  Note that it only checks the connecting SMTP client IP address and hostname.  It does not check for third party message relaying.  The check_rcpt ruleset discussed below usually does third party message relay checking.

5.1.4.2. check_mail

The check_mail ruleset is passed the user name parameter of the SMTP MAIL command.  It can accept or reject the address.

5.1.4.3. check_rcpt

The check_rcpt ruleset is passed the user name parameter of the SMTP RCPT command.  It can accept or reject the address.

5.1.4.4. check_data

The check_data ruleset is called after the SMTP DATA command; its parameter is the number of recipients.  It can accept or reject the command.

5.1.4.5. check_compat

The check_compat ruleset is passed

	sender-address $| recipient-address

where $| is a metacharacter separating the addresses.  It can accept or reject mail transfer between these two addresses much like the checkcompat() function.  Note: while other check_* rulesets are invoked during the SMTP mail reception stage (i.e., in the SMTP server), check_compat is invoked during the mail delivery stage.

5.1.4.6. check_eoh

The check_eoh ruleset is passed

	number-of-headers $| size-of-headers

where $| is a metacharacter separating the numbers.  These numbers can be used for size comparisons with the arith map.  The ruleset is triggered after all of the headers have been read.  It can be used to correlate information gathered from those headers using the macro storage map.  One possible use is to check for a missing header.  For example:
For example: SMM:08-66 Sendmail Installation and Operation Guide Kstorage macro Keep in mind the Message-Id: header is not a required header and is not a guaranteed spam indicator. This ruleset is an example and should probably not be used in production. 5.1.4.7. check_eom The check_eom ruleset is called after the end of a message, its parameter is the message size. It can accept or reject the message. 5.1.4.8. check_etrn The check_etrn ruleset is passed the param- eter of the SMTP ETRN command. It can accept or reject the command. 5.1.4.9. check_expn The check_expn ruleset is passed the user name parameter of the SMTP EXPN command. It can accept or reject the address. 5.1.4.10. check_vrfy The check_vrfy ruleset is passed the user name parameter of the SMTP VRFY command. It can accept or reject the command. Sendmail Installation and Operation Guide SMM:08-67 5.1.4.11. trust_auth The trust_auth ruleset is passed the AUTH= parameter of the SMTP MAIL command. It is used to determine whether this value should be trusted. In order to make this decision, the ruleset may make use of the various ${auth_*} macros. If the ruleset does resolve to the "error" mailer the AUTH= parameter is not trusted and hence not passed on to the next relay. 5.1.4.12. tls_client The tls_client ruleset is called when send- mail acts as server, after a STARTTLS command has been issued, and from check_mail. The param- eter is the value of ${verify} and STARTTLS or MAIL, respectively. If the ruleset does resolve to the "error" mailer, the appropriate error code is returned to the client. 5.1.4.13. tls_server The tls_server ruleset is called when send- mail acts as client after a STARTTLS command (should) have been issued. The parameter is the value of ${verify}. If the ruleset does resolve to the "error" mailer, the connection is aborted (treated as non-deliverable with a permanent or temporary error). 5.1.4.14. tls_rcpt The tls_rcpt ruleset is called each time before a RCPT TO command is sent. 
The parameter is the current recipient.  If the ruleset does resolve to the "error" mailer, the RCPT TO command is suppressed (treated as non-deliverable with a permanent or temporary error).  This ruleset makes it possible to require encryption or verification of the recipient's MTA even if the mail is somehow redirected to another host.  For example, sending mail to luke@endmail.org may get redirected to a host named death.star and hence the tls_server ruleset won't apply.  By introducing per recipient restrictions such attacks (e.g., via DNS spoofing) can be made impossible.  See cf/README for how this ruleset can be used.

5.1.4.15. srv_features

The srv_features ruleset is called with the connecting client's host name when a client connects to sendmail.  This ruleset should return $# followed by a list of options (single characters delimited by white space).  If the return value starts with anything else it is silently ignored.  Generally upper case characters turn off a feature while lower case characters turn it on.  Option `S' causes the server not to offer STARTTLS, which is useful to interact with MTAs/MUAs that have broken STARTTLS implementations by simply not offering it.  `V' turns off the request for a client certificate during the TLS handshake.  Options `A' and `P' suppress SMTP AUTH and PIPELINING, respectively.  `c' is the equivalent to AuthOptions=p, i.e., it doesn't permit mechanisms susceptible to simple passive attack (e.g., PLAIN, LOGIN), unless a security layer is active.  Option `l' requires SMTP AUTH for a connection.  Options 'B', 'D', 'E', and 'X' suppress SMTP VERB, DSN, ETRN, and EXPN, respectively.

	V	Do not request a client certificate
	v	Request a client certificate (default)
	X	Do not offer EXPN
	x	Offer EXPN (default)

Note: the entries marked as ``(default)'' may require that some configuration has been made, e.g., SMTP AUTH is only available if properly configured.
Moreover, many options can be changed on a global basis via other settings as explained in this document, e.g., via DaemonPortOptions.

The ruleset may return `$#temp' to indicate that there is a temporary problem determining the correct features, e.g., if a map is unavailable. In that case, the SMTP server issues a temporary failure and does not accept email.

5.1.4.16. try_tls

The try_tls ruleset is called when sendmail connects to another MTA. If the ruleset does resolve to the "error" mailer, sendmail does not try STARTTLS even if it is offered. This is useful to interact with MTAs that have broken STARTTLS implementations by simply not using it.

5.1.4.17. authinfo

The authinfo ruleset is called when sendmail tries to authenticate to another MTA. It should return $# followed by a list of tokens that are used for SMTP AUTH. If the return value starts with anything else it is silently ignored. Each token is a tagged string of the form "TDstring" (including the quotes), where

    T       Tag which describes the item
    D       Delimiter: ':' simple text follows,
            '=' string is base64 encoded
    string  Value of the item

Valid values for the tag are:

    U    user (authorization) id
    I    authentication id
    P    password
    R    realm
    M    list of mechanisms delimited by spaces

If this ruleset is defined, the option DefaultAuthInfo is ignored (even if the ruleset does not return a ``useful'' result).

5.1.4.18. queuegroup

The queuegroup ruleset is used to map a recipient address to a queue group name. The input for the ruleset is a recipient address as specified by the SMTP RCPT command. The ruleset should return $# followed by the name of a queue group. If the return value starts with anything else it is silently ignored. See the section about ``Queue Groups and Queue Directories'' for further information.

5.1.4.19.
greet_pause

The greet_pause ruleset is used to specify the amount of time to pause before sending the initial SMTP 220 greeting. If any traffic is received during that pause, an SMTP 554 rejection response is given instead of the 220 greeting and all SMTP commands are rejected during that connection. This helps protect sites from open proxies and SMTP slammers. The ruleset should return $# followed by the number of milliseconds (thousandths of a second) to pause. If the return value starts with anything else or is not a number, it is silently ignored. Note: this ruleset is not invoked (and hence the feature is disabled) when smtps (SMTP over SSL) is used, i.e., the s modifier is set for the daemon via DaemonPortOptions, because in this case the SSL handshake is performed before the greeting is sent.

5.1.5. IPC mailers

Some special processing occurs if ruleset zero resolves to an IPC mailer (that is, a mailer that has "[IPC]" listed as the Path in the M configuration line). The host name passed after "$@" has MX expansion performed if not delivering via a named socket; this looks the name up in DNS to find alternate delivery sites. The host name can also be provided as a dotted quad or an IPv6 address in square brackets; for example:

    [128.32.149.78]

or

    [IPv6:2002:c0a8:51d2::23f4]

This causes direct conversion of the numeric value to an IP host address.

The host name passed in after the "$@" may also be a colon-separated list of hosts. Each is separately MX expanded and the results are concatenated to make (essentially) one long MX list. The intent here is to create "fake" MX records that are not published in DNS for private internal networks.

As a final special case, the host name can be passed in as a text string in square brackets:

    [ucbvax.berkeley.edu]

This form avoids the MX mapping. N.B.: This is intended only for situations where you have a network firewall.

5.2.
D -- Define Macro

Macros are named with a single character or with a word in {braces}. The names ``x'' and ``{x}'' denote the same macro for every single character ``x''. Single character names may be selected from the entire ASCII set, but user-defined macros should be selected from the set of upper case letters only. Lower case letters and special symbols are used internally. Long names beginning with a lower case letter or a punctuation character are reserved for use by sendmail, so user-defined long macro names should begin with an upper case letter.

The syntax for macro definitions is:

    Dxval

where x is the name of the macro (which may be a single character or a word in braces) and val is the value it should have. There should be no spaces given that do not actually belong in the macro value.

Macros are interpolated using the construct $x, where x is the name of the macro to be interpolated. This interpolation is done when the configuration file is read, except in M lines. The special construct $&x can be used in R lines to get deferred interpolation.

Conditionals can be specified using the syntax:

    $?x text1 $| text2 $.

This interpolates text1 if the macro $x is set and non-null, and text2 otherwise. The "else" ($|) clause may be omitted.

The following macros are defined and/or used internally by sendmail for interpolation into argv's for mailers or for other contexts. The ones marked - are information passed into sendmail[16], the ones marked = are information passed both in and out of sendmail, and the unmarked macros are passed out of sendmail but are not otherwise used internally. These macros are:

$a      The origination date in RFC 822 format. This is extracted from the Date: line.

$b      The current date in RFC 822 format.

$c      The hop count. This is a count of the number of Received: lines plus the value of the -h command line flag.

$d      The current date in UNIX (ctime) format.
$e-     (Obsolete; use SmtpGreetingMessage option instead.) The SMTP entry message. This is printed out when SMTP starts up. The first word must be the $j macro as specified by RFC 821. Defaults to "$j Sendmail $v ready at $b". Commonly redefined to include the configuration version number, e.g., "$j Sendmail $v/$Z ready at $b".

$f      The envelope sender (from) address.

$g      The sender address relative to the recipient. For example, if $f is "foo", $g will be "host!foo", "foo@host.domain", or whatever is appropriate for the receiving mailer.

$h      The recipient host. This is set in ruleset 0 from the $@ field of a parsed address.

$i      The queue id, e.g., "f344MXxp018717".

$j=     The "official" domain name for this site. This is fully qualified if the full qualification can be found. It must be redefined to be the fully qualified domain name if your system is not configured so that information can find it automatically.

$k      The UUCP node name (from the uname system call).

$l-     (Obsolete; use UnixFromLine option instead.) The format of the UNIX from line. Unless you have changed the UNIX mailbox format, you should not change the default, which is "From $g $d".

$m      The domain part of the gethostname return value. Under normal circumstances, $j is equivalent to $w.$m.

$n-     The name of the daemon (for error messages). Defaults to "MAILER-DAEMON".

$o-     (Obsolete: use OperatorChars option instead.) The set of "operators" in addresses. A list of characters which will be considered tokens and which will separate tokens when doing parsing. For example, if "@" were in the $o macro, then the input "a@b" would be scanned as three tokens: "a," "@," and "b."

____________________
[16] As of version 8.6, all of these macros have reasonable defaults. Previous versions required that they be defined.
        Defaults to ".:@[]", which is the minimum set necessary to do RFC 822 parsing; a richer set of operators is ".:%@!/[]", which adds support for UUCP, the %-hack, and X.400 addresses.

$p      Sendmail's process id.

$q-     Default format of sender address. The $q macro specifies how an address should appear in a message when it is defaulted. Defaults to "<$g>". It is commonly redefined to be "$?x$x <$g>$|$g$." or "$g$?x ($x)$.", corresponding to the following two formats:

            Eric Allman <eric@CS.Berkeley.EDU>
            eric@CS.Berkeley.EDU (Eric Allman)

        Sendmail properly quotes names that have special characters if the first form is used.

$r      Protocol used to receive the message. Set from the -p command line flag or by the SMTP server code.

$s      Sender's host name. Set from the -p command line flag or by the SMTP server code (in which case it is set to the EHLO/HELO parameter).

$t      A numeric representation of the current time in the format YYYYMMDDHHmm (4 digit year 1900-9999, 2 digit month 01-12, 2 digit day 01-31, 2 digit hours 00-23, 2 digit minutes 00-59).

$u      The recipient user.

$v      The version number of the sendmail binary.

$w=     The hostname of this site. This is the root name of this host (but see below for caveats).

$x      The full name of the sender.

$z      The home directory of the recipient.

$_      The validated sender address. See also ${client_resolve}.

${addr_type}
        The type of the address which is currently being rewritten. This macro contains up to three characters; the first is either `e' or `h' for envelope/header address, the second is a space, and the third is either `s' or `r' for sender/recipient address.

${alg_bits}
        The maximum keylength (in bits) of the symmetric encryption algorithm used for a TLS connection. This may be less than the effective keylength, which is stored in ${cipher_bits}, for ``export controlled'' algorithms.
${auth_authen}
        The client's authentication credentials as determined by authentication (only set if successful). The format depends on the mechanism used; it might be just `user', or `user@realm', or something similar (SMTP AUTH only).

${auth_author}
        The authorization identity, i.e. the AUTH= parameter of the SMTP MAIL command if supplied.

${auth_type}
        The mechanism used for SMTP authentication (only set if successful).

${auth_ssf}
        The keylength (in bits) of the symmetric encryption algorithm used for the security layer of a SASL mechanism.

${bodytype}
        The message body type (7BIT or 8BITMIME), as determined from the envelope.

${cert_issuer}
        The DN (distinguished name) of the CA (certificate authority) that signed the presented certificate (the cert issuer) (STARTTLS only).

${cert_md5}
        The MD5 hash of the presented certificate (STARTTLS only).

${cert_subject}
        The DN of the presented certificate (called the cert subject) (STARTTLS only).

${cipher}
        The cipher suite used for the connection, e.g., EDH-DSS-DES-CBC3-SHA, EDH-RSA-DES-CBC-SHA, DES-CBC-MD5, DES-CBC3-SHA (STARTTLS only).

${cipher_bits}
        The effective keylength (in bits) of the symmetric encryption algorithm used for a TLS connection.

${client_addr}
        The IP address of the SMTP client. IPv6 addresses are tagged with "IPv6:" before the address. Defined in the SMTP server only.

${client_connections}
        The number of open connections in the SMTP server for the client IP address.

${client_flags}
        The flags specified by the Modifier= part of ClientPortOptions, where flags are separated from each other by spaces and upper case flags are doubled. That is, Modifier=hA will be represented as "h AA" in ${client_flags}, which is required for testing the flags in rulesets.

${client_name}
        The host name of the SMTP client.
        This may be the client's bracketed IP address in the form [ nnn.nnn.nnn.nnn ] for IPv4 and [ IPv6:nnnn:...:nnnn ] for IPv6 if the client's IP address is not resolvable, or if it is resolvable but the IP address of the resolved hostname doesn't match the original IP address. Defined in the SMTP server only. See also ${client_resolve}.

${client_port}
        The port number of the SMTP client. Defined in the SMTP server only.

${client_ptr}
        The result of the PTR lookup for the client IP address. Note: this is the same as ${client_name} if and only if ${client_resolve} is OK. Defined in the SMTP server only.

${client_rate}
        The number of incoming connections for the client IP address over the time interval specified by ConnectionRateWindowSize.

${client_resolve}
        Holds the result of the resolve call for ${client_name}. Possible values are:

            OK      resolved successfully
            FAIL    permanent lookup failure
            FORGED  forward lookup doesn't match reverse lookup
            TEMP    temporary lookup failure

        Defined in the SMTP server only. sendmail performs a hostname lookup on the IP address of the connecting client. Next the IP addresses of that hostname are looked up. If the client IP address does not appear in that list, then the hostname is maybe forged. This is reflected as the value FORGED for ${client_resolve} and it also shows up in $_ as "(may be forged)".

${cn_issuer}
        The CN (common name) of the CA that signed the presented certificate (STARTTLS only). Note: if the CN cannot be extracted properly it will be replaced by one of these strings based on the encountered error:

            BadCertificateContainsNUL   CN contains a NUL character
            BadCertificateTooLong       CN is too long
            BadCertificateUnknown       CN could not be extracted

        In the last case, some other (unspecific) error occurred.

${cn_subject}
        The CN (common name) of the presented certificate (STARTTLS only). See ${cn_issuer} for possible replacements.
${currHeader}
        Header value as quoted string (possibly truncated to MAXNAME). This macro is only available in header check rulesets.

${daemon_addr}
        The IP address the daemon is listening on for connections.

${daemon_family}
        The network family if the daemon is accepting network connections. Possible values include "inet", "inet6", "iso", "ns", "x.25".

${daemon_flags}
        The flags for the daemon as specified by the Modifier= part of DaemonPortOptions, whereby the flags are separated from each other by spaces, and upper case flags are doubled. That is, Modifier=Ea will be represented as "EE a" in ${daemon_flags}, which is required for testing the flags in rulesets.

${daemon_info}
        Some information about a daemon as a text string. For example, "SMTP+queueing@00:30:00".

${daemon_name}
        The name of the daemon from the DaemonPortOptions Name= suboption. If this suboption is not set, "Daemon#", where # is the daemon number, is used.

${daemon_port}
        The port the daemon is accepting connections on. Unless DaemonPortOptions is set, this will most likely be "25".

${deliveryMode}
        The current delivery mode sendmail is using. It is initially set to the value of the DeliveryMode option.

${envid}
        The envelope id parameter (ENVID=) passed to sendmail as part of the envelope.

${hdrlen}
        The length of the header value which is stored in ${currHeader} (before possible truncation). If this value is greater than or equal to MAXNAME the header has been truncated.

${hdr_name}
        The name of the header field for which the current header check ruleset has been called. This is useful for a default header check ruleset to get the name of the header; the macro is only available in header check rulesets.

${if_addr}
        The IP address of the interface of an incoming connection unless it is in the loopback net. IPv6 addresses are tagged with "IPv6:" before the address.
${if_addr_out}
        The IP address of the interface of an outgoing connection unless it is in the loopback net. IPv6 addresses are tagged with "IPv6:" before the address.

${if_family}
        The IP family of the interface of an incoming connection unless it is in the loopback net.

${if_family_out}
        The IP family of the interface of an outgoing connection unless it is in the loopback net.

${if_name}
        The hostname associated with the interface of an incoming connection. This macro can be used for SmtpGreetingMessage and HReceived for virtual hosting. For example:

            O SmtpGreetingMessage=$?{if_name}${if_name}$|$j$. MTA

${if_name_out}
        The name of the interface of an outgoing connection.

${load_avg}
        The current load average.

${mail_addr}
        The address part of the resolved triple of the address given for the SMTP MAIL command. Defined in the SMTP server only.

${mail_host}
        The host from the resolved triple of the address given for the SMTP MAIL command. Defined in the SMTP server only.

${mail_mailer}
        The mailer from the resolved triple of the address given for the SMTP MAIL command. Defined in the SMTP server only.

${msg_id}
        The value of the Message-Id: header.

${msg_size}
        The value of the SIZE= parameter, i.e., usually the size of the message (in an ESMTP dialogue), before the message has been collected, thereafter the message size as computed by sendmail (and can be used in check_compat).

${nbadrcpts}
        The number of bad recipients for a single message.

${nrcpts}
        The number of validated recipients for a single message. Note: since recipient validation happens after check_rcpt has been called, the value in this ruleset is one less than what might be expected.

${ntries}
        The number of delivery attempts.

${opMode}
        The current operation mode (from the -b flag).

${quarantine}
        The quarantine reason for the envelope, if it is quarantined.

${queue_interval}
        The queue run interval given by the -q flag.
        For example, -q30m would set ${queue_interval} to "00:30:00".

${rcpt_addr}
        The address part of the resolved triple of the address given for the SMTP RCPT command. Defined in the SMTP server only after a RCPT command.

${rcpt_host}
        The host from the resolved triple of the address given for the SMTP RCPT command. Defined in the SMTP server only after a RCPT command.

${rcpt_mailer}
        The mailer from the resolved triple of the address given for the SMTP RCPT command. Defined in the SMTP server only after a RCPT command.

${server_addr}
        The address of the server of the current outgoing SMTP connection. For LMTP delivery the macro is set to the name of the mailer.

${server_name}
        The name of the server of the current outgoing SMTP or LMTP connection.

${time}
        The output of the time(3) function, i.e., the number of seconds since 0 hours, 0 minutes, 0 seconds, January 1, 1970, Coordinated Universal Time (UTC).

${tls_version}
        The TLS/SSL version used for the connection, e.g., TLSv1, SSLv3, SSLv2; defined after STARTTLS has been used.

${total_rate}
        The total number of incoming connections over the time interval specified by ConnectionRateWindowSize.

${verify}
        The result of the verification of the presented cert; only defined after STARTTLS has been used (or attempted). Possible values are:

            OK        verification succeeded.
            NO        no cert presented.
            NOT       no cert requested.
            FAIL      cert presented but could not be verified, e.g., the signing CA is missing.
            NONE      STARTTLS has not been performed.
            TEMP      temporary error occurred.
            PROTOCOL  some protocol error occurred at the ESMTP level (not TLS).
            SOFTWARE  STARTTLS handshake failed, which is a fatal error for this session; the e-mail will be queued.

There are three types of dates that can be used. The $a and $b macros are in RFC 822 format; $a is the time as extracted from the "Date:" line of the message (if there was one), and $b is the current date and time (used for postmarks).
If no "Date:" line is found in the incoming message, $a is set to the current time also. The $d macro is equivalent to the $b macro in UNIX (ctime) format.

The macros $w, $j, and $m are set to the identity of this host. Sendmail tries to find the fully qualified name of the host if at all possible; it does this by calling gethostname(2) to get the current hostname and then passing that to gethostbyname(3), which is supposed to return the canonical version of that host name.[17] Assuming this is successful, $j is set to the fully qualified name and $m is set to the domain part of the name (everything after the first dot). The $w macro is set to the first word (everything before the first dot) if you have a level 5 or higher configuration file; otherwise, it is set to the same value as $j. If the canonification is not successful, it is imperative that the config file set $j to the fully qualified domain name[18].

The $f macro is the id of the sender as originally determined; when mailing to a specific host the $g macro is set to the address of the sender relative to the recipient. For example, if I send to "bollard@matisse.CS.Berkeley.EDU" from the machine "vangogh.CS.Berkeley.EDU" the $f macro will be "eric" and the $g macro will be "eric@vangogh.CS.Berkeley.EDU."

The $x macro is set to the full name of the sender. This can be determined in several ways. It can be passed as a flag to sendmail. It can be defined in the NAME environment variable. The third choice is the value of the "Full-Name:" line in the header if it exists, and the fourth choice is the comment field of a "From:" line. If all of these fail, and if the message is being originated locally, the full name is looked up in the /etc/passwd file.

When sending, the $h, $u, and $z macros get set to the host, user, and home directory (if local) of the recipient. The first two are set from the $@ and $: part of the rewriting rules, respectively.
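The relationship between $j, $w, and $m described above amounts to splitting the canonical host name at its first dot. A small sketch (the helper name is an assumption, chosen for illustration):

```python
def split_canonical_name(fqdn):
    """Split a canonical host name the way sendmail derives its identity
    macros: $j is the full name, $w is everything before the first dot,
    and $m is everything after it."""
    w, _, m = fqdn.partition(".")
    return {"j": fqdn, "w": w, "m": m}

# usage: the example host from the text above
print(split_canonical_name("vangogh.CS.Berkeley.EDU"))
```

Note that with a pre-level-5 configuration file $w is set to the same value as $j, so this split applies only to level 5 and higher configurations.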
The $p and $t macros are used to create unique strings (e.g., for the "Message-Id:" field). The $i macro is set to the queue id on this host; if put into the timestamp line it can be extremely useful for tracking messages. The $v macro is set to be the version number of sendmail; this is normally put in timestamps and has been proven extremely useful for debugging.

____________________
[17] For example, on some systems gethostname might return "foo" which would be mapped to "foo.bar.com" by gethostbyname.
[18] Older versions of sendmail didn't pre-define $j at all, so up until 8.6, config files always had to define $j.

The $c field is set to the "hop count," i.e., the number of times this message has been processed. This can be determined by the -h flag on the command line or by counting the timestamps in the message.

The $r and $s fields are set to the protocol used to communicate with sendmail and the sending hostname. They can be set together using the -p command line flag or separately using the -M or -oM flags.

The $_ is set to a validated sender host name. If the sender is running an RFC 1413 compliant IDENT server and the receiver has the IDENT protocol turned on, it will include the user name on that host.

The ${client_name}, ${client_addr}, and ${client_port} macros are set to the name, address, and port number of the SMTP client who is invoking sendmail as a server. These can be used in the check_* rulesets (using the $& deferred evaluation form, of course!).

5.3. C and F -- Define Classes

Classes of phrases may be defined to match on the left hand side of rewriting rules, where a "phrase" is a sequence of characters that does not contain space characters. For example a class of all local names for this site might be created so that attempts to send to oneself can be eliminated. These can either be defined directly in the configuration file or read in from another file.
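The sort of local-names class described above can be pictured as a plain set of phrases: a rule tests whether a token is in the set ($=) or, for a single token, not in it ($~). A sketch with illustrative class contents (the helper names are assumptions, not sendmail API):

```python
# A class is just a set of phrases. Testing $=H is set membership;
# testing $~H is the negation (for a single token).
local_hosts = {"monet", "ucbmonet"}   # e.g., a class H built from "CHmonet ucbmonet"

def matches_class(token, cls):
    """Equivalent of a $=H match on a single token."""
    return token in cls

def matches_not_class(token, cls):
    """Equivalent of a $~H match on a single token."""
    return token not in cls

print(matches_class("monet", local_hosts))        # True
print(matches_not_class("vangogh", local_hosts))  # True
```

This is only the membership test itself; the surrounding tokenization and rewriting machinery of R lines is not modeled here.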
Classes are named as a single letter or a word in {braces}. Class names beginning with lower case letters and special characters are reserved for system use. Classes defined in config files may be given names from the set of upper case letters for short names, or beginning with an upper case letter for long names.

The syntax is:

    Ccphrase1 phrase2...
    Fcfile
    Fc|program
    Fc[mapkey]@mapclass:mapspec

The first form defines the class c to match any of the named words. If phrase1 or phrase2 is another class, e.g., $=S, the contents of class S are added to class c. It is permissible to split them among multiple lines; for example, the two forms:

    CHmonet ucbmonet

and

    CHmonet
    CHucbmonet

are equivalent.

The ``F'' forms read the elements of the class c from the named file, program, or map specification. Each element should be listed on a separate line. To specify an optional file, use ``-o'' between the class name and the file name, e.g.,

    Fc -o /path/to/file

If the file can't be used, sendmail will not complain but silently ignore it.

The map form should be an optional map key, an at sign, and a map class followed by the specification for that map. Examples include:

    F{VirtHosts}@ldap:-k (&(objectClass=virtHosts)(host=*)) -v host
    F{MyClass}foo@hash:/etc/mail/classes

which will fill the class $={VirtHosts} from an LDAP map lookup and $={MyClass} from a hash database map lookup of the key foo. There is also a built-in schema that can be accessed by only specifying:

    F{ClassName}@LDAP

This will tell sendmail to use the default schema:

    -k (&(objectClass=sendmailMTAClass)
         (sendmailMTAClassName=ClassName)
         (|(sendmailMTACluster=${sendmailMTACluster})
           (sendmailMTAHost=$j)))
    -v sendmailMTAClassValue

Note that the lookup is only done when sendmail is initially started.

Elements of classes can be accessed in rules using $= or $~.
The $~ (match entries not in class) only matches a single word; multi-word entries in the class are ignored in this context.

Some classes have internal meaning to sendmail:

$=e     contains the Content-Transfer-Encodings that can be 8->7 bit encoded. It is predefined to contain "7bit", "8bit", and "binary".

$=k     set to be the same as $k, that is, the UUCP node name.

$=m     set to the set of domains by which this host is known, initially just $m.

$=n     can be set to the set of MIME body types that can never be eight to seven bit encoded. It defaults to "multipart/signed". Message types "message/*" and "multipart/*" are never encoded directly. Multipart messages are always handled recursively. The handling of message/* messages is controlled by class $=s.

$=q     A set of Content-Types that will never be encoded as base64 (if they have to be encoded, they will be encoded as quoted-printable). It can have primary types (e.g., "text") or full types (such as "text/plain").

$=s     contains the set of subtypes of message that can be treated recursively. By default it contains only "rfc822". Other "message/*" types cannot be 8->7 bit encoded. If a message containing eight bit data is sent to a seven bit host, and that message cannot be encoded into seven bits, it will be stripped to 7 bits.

$=t     set to the set of trusted users by the T configuration line. If you want to read trusted users from a file, use Ft/file/name.

$=w     set to be the set of all names this host is known by. This can be used to match local hostnames.

$={persistentMacros}
        set to the macros that should be saved across queue runs. Care should be taken when adding macro names to this class.

Sendmail can be compiled to allow a scanf(3) string on the F line. This lets you do simplistic parsing of text files. For example, to read all the user names in your system /etc/passwd file into a class, use

    FL/etc/passwd %[^:]

which reads every line up to the first colon.

5.4.
M -- Define Mailer

Programs and interfaces to mailers are defined in this line. The format is:

    Mname, {field=value}*

where name is the name of the mailer (used internally only) and the "field=value" pairs define attributes of the mailer. Fields are:

    Path         The pathname of the mailer
    Flags        Special flags for this mailer
    Sender       Rewriting set(s) for sender addresses
    Recipient    Rewriting set(s) for recipient addresses
    Recipients   Maximum number of recipients per connection
    Argv         An argument vector to pass to this mailer
    Eol          The end-of-line string for this mailer
    Maxsize      The maximum message length to this mailer
    Maxmessages  The maximum message deliveries per connection
    Linelimit    The maximum line length in the message body
    Directory    The working directory for the mailer
    Userid       The default user and group id to run as
    Nice         The nice(2) increment for the mailer
    Charset      The default character set for 8-bit characters
    Type         Type information for DSN diagnostics
    Wait         The maximum time to wait for the mailer
    Queuegroup   The default queue group for the mailer
    /            The root directory for the mailer

Only the first character of the field name is checked (it's case-sensitive).

The following flags may be set in the mailer description. Any other flags may be used freely to conditionally assign headers to messages destined for particular mailers. Flags marked with - are not interpreted by the sendmail binary; these are conventionally used to correlate to the flags portion of the H line. Flags marked with = apply to the mailers for the sender address rather than the usual recipient mailers.

a       Run Extended SMTP (ESMTP) protocol (defined in RFCs 1869, 1652, and 1870). This flag defaults on if the SMTP greeting message includes the word "ESMTP".

A       Look up the user (address) part of the resolved mailer triple in the alias database. Normally this is only set for local mailers.

b       Force a blank line on the end of a message.
        This is intended to work around some stupid versions of /bin/mail that require a blank line, but do not provide it themselves. It would not normally be used on network mail.

B       Strip leading backslashes (\) off of the address; this is a subset of the functionality of the s flag.

c       Do not include comments in addresses. This should only be used if you have to work around a remote mailer that gets confused by comments. This strips addresses of the form "Phrase <address>" or "address (Comment)" down to just "address".

C=      If mail is received from a mailer with this flag set, any addresses in the header that do not have an at sign ("@") after being rewritten by ruleset three will have the "@domain" clause from the sender envelope address tacked on. This allows mail with headers of the form:

            From: usera@hosta
            To: userb@hostb, userc

        to be rewritten as:

            From: usera@hosta
            To: userb@hostb, userc@hosta

        automatically. However, it doesn't really work reliably.

d       Do not include angle brackets around route-address syntax addresses. This is useful on mailers that are going to pass addresses to a shell that might interpret angle brackets as I/O redirection. However, it does not protect against other shell metacharacters. Therefore, passing addresses to a shell should not be considered secure.

D-      This mailer wants a "Date:" header line.

e       This mailer is expensive to connect to, so try to avoid connecting normally; any necessary connection will occur during a queue run. See also option HoldExpensive.

E       Escape lines beginning with "From " in the message with a `>' sign.

f       The mailer wants a -f from flag, but only if this is a network forward operation (i.e., the mailer will give an error if the executing user does not have special permissions).

F-      This mailer wants a "From:" header line.
g       Normally, sendmail sends internally generated email (e.g., error messages) using the null return address as required by RFC 1123. However, some mailers don't accept a null return address. If necessary, you can set the g flag to prevent sendmail from obeying the standards; error messages will be sent as from the MAILER-DAEMON (actually, the value of the $n macro).

h       Upper case should be preserved in host names (the $@ portion of the mailer triplet resolved from ruleset 0) for this mailer.

i       Do User Database rewriting on the envelope sender address.

I       This mailer will be speaking SMTP to another sendmail -- as such it can use special protocol features. This flag should not be used except for debugging purposes because it uses VERB as SMTP command.

j       Do User Database rewriting on recipients as well as senders.

k       Normally when sendmail connects to a host via SMTP, it checks to make sure that this isn't accidentally the same host name, as might happen if sendmail is misconfigured or if a long-haul network interface is set in loopback mode. This flag disables the loopback check. It should only be used under very unusual circumstances.

K       Currently unimplemented. Reserved for chunking.

l       This mailer is local (i.e., final delivery will be performed).

L       Limit the line lengths as specified in RFC 821. This deprecated option should be replaced by the L= mail declaration. For historic reasons, the L flag also sets the 7 flag.

m       This mailer can send to multiple users on the same host in one transaction. When a $u macro occurs in the argv part of the mailer definition, that field will be repeated as necessary for all qualifying users. Removing this flag can defeat duplicate suppression on a remote site as each recipient is sent in a separate transaction.

M-      This mailer wants a "Message-Id:" header line.

n       Do not insert a UNIX-style "From" line on the front of the message.
o    Always run as the owner of the recipient mailbox. Normally sendmail runs as the sender for locally generated mail or as "daemon" (actually, the user specified in the u option) when delivering network mail. The normal behavior is required by most local mailers, which will not allow the envelope sender address to be set unless the mailer is running as daemon. This flag is ignored if the S flag is set.

p    Use the route-addr style reverse-path in the SMTP "MAIL FROM:" command rather than just the return address; although this is required in RFC 821 section 3.1, many hosts do not process reverse-paths properly. Reverse-paths are officially discouraged by RFC 1123.

P-   This mailer wants a "Return-Path:" line.

q    When an address that resolves to this mailer is verified (SMTP VRFY command), generate 250 responses instead of 252 responses. This will imply that the address is local.

r    Same as f, but sends a -r flag.

R    Open SMTP connections from a "secure" port. Secure ports aren't (secure, that is) except on UNIX machines, so it is unclear that this adds anything. sendmail must be running as root to be able to use this flag.

s    Strip quote characters (" and \) off of the address before calling the mailer.

S    Don't reset the userid before calling the mailer. This would be used in a secure environment where sendmail ran as root. This could be used to avoid forged addresses. If the U= field is also specified, this flag causes the effective user id to be set to that user.

u    Upper case should be preserved in user names for this mailer. Standards require preservation of case in the local part of addresses, except for those addresses for which your system accepts responsibility. RFC 2142 provides a long list of addresses which should be case insensitive. If you use this flag, you may be violating RFC 2142. Note that postmaster is always treated as a case-insensitive address regardless of this flag.
U    This mailer wants UUCP-style "From" lines with the ugly "remote from <host>" on the end.

w    The user must have a valid account on this machine, i.e., getpwnam must succeed. If not, the mail is bounced. See also the MailBoxDatabase option. This is required to get ".forward" capability.

W    Ignore long term host status information (see Section "Persistent Host Status Information").

x-   This mailer wants a "Full-Name:" header line.

X    This mailer wants to use the hidden dot algorithm as specified in RFC 821; basically, any line beginning with a dot will have an extra dot prepended (to be stripped at the other end). This ensures that lines in the message containing a dot will not terminate the message prematurely.

z    Run Local Mail Transfer Protocol (LMTP) between sendmail and the local mailer. This is a variant on SMTP defined in RFC 2033 that is specifically designed for delivery to a local mailbox.

Z    Apply DialDelay (if set) to this mailer.

0    Don't look up MX records for hosts sent via SMTP/LMTP. Do not apply FallbackMXhost either.

1    Don't send null characters ('\0') to this mailer.

2    Don't use ESMTP even if offered; this is useful for broken systems that offer ESMTP but fail on EHLO (without recovering when HELO is tried next).

3    Extend the list of characters converted to =XX notation when converting to Quoted-Printable to include those that don't map cleanly between ASCII and EBCDIC. Useful if you have IBM mainframes on site.

5    If no aliases are found for this address, pass the address through ruleset 5 for possible alternate resolution. This is intended to forward the mail to an alternate delivery spot.

6    Strip headers to seven bits.

7    Strip all output to seven bits. This is the default if the L flag is set. Note that clearing this option is not sufficient to get full eight-bit data passed through sendmail.
     If the 7 option is set, this is essentially always set, since the eighth bit was stripped on input. Note that this option will only impact messages that didn't have 8->7 bit MIME conversions performed.

8    If set, it is acceptable to send eight-bit data to this mailer; the usual attempt to do 8->7 bit MIME conversions will be bypassed.

9    If set, do limited 7->8 bit MIME conversions. These conversions are limited to text/plain data.

:    Check addresses to see if they begin ":include:"; if they do, convert them to the "*include*" mailer.

|    Check addresses to see if they begin with a `|'; if they do, convert them to the "prog" mailer.

/    Check addresses to see if they begin with a `/'; if they do, convert them to the "*file*" mailer.

@    Look up addresses in the user database.

%    Do not attempt delivery on initial receipt of a message or on queue runs unless the queued message is selected using one of the -qI/-qR/-qS queue run modifiers or an ETRN request.

!    Disable an MH hack that drops an explicit From: header if it is the same as what sendmail would generate.

Configuration files prior to level 6 assume the `A', `w', `5', `:', `|', `/', and `@' options on the mailer named "local".

The mailer with the special name "error" can be used to generate a user error. The (optional) host field is an exit status to be returned, and the user field is a message to be printed. The exit status may be numeric or one of the values USAGE, NOUSER, NOHOST, UNAVAILABLE, SOFTWARE, TEMPFAIL, PROTOCOL, or CONFIG to return the corresponding EX_ exit code, or an enhanced error code as described in RFC 1893, Enhanced Mail System Status Codes. For example, the entry:

    $#error $@ NOHOST $: Host unknown in this domain

on the RHS of a rule will cause the specified error to be generated and the "Host unknown" exit status to be returned if the LHS matches. This mailer is only functional in rulesets 0, 5, or one of the check_* rulesets.
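The error mailer can also be paired with an RFC 1893 enhanced status code instead of a symbolic exit value. As a hypothetical sketch (the ruleset fragment, domain, and reply text below are illustrative, not taken from this guide), a check_mail rule might reject a known-bad sender domain like this:

    Scheck_mail
    R$* < @ bad.example. > $*	$#error $@ 5.7.1 $: "550 Access denied"

Here 5.7.1 is the enhanced error code and the quoted string is the SMTP reply text returned to the client.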
The host field can also contain the special token quarantine, which instructs sendmail to quarantine the current message.

The mailer with the special name "discard" causes any mail sent to it to be discarded but otherwise treated as though it were successfully delivered. This mailer cannot be used in ruleset 0, only in the various address checking rulesets.

The mailer named "local" must be defined in every configuration file. This is used to deliver local mail, and is treated specially in several ways. Additionally, three other mailers named "prog", "*file*", and "*include*" may be defined to tune the delivery of messages to programs, files, and :include: lists respectively. They default to:

    Mprog, P=/bin/sh, F=lsoDq9, T=DNS/RFC822/X-Unix, A=sh -c $u
    M*file*, P=[FILE], F=lsDFMPEouq9, T=DNS/RFC822/X-Unix, A=FILE $u
    M*include*, P=/dev/null, F=su, A=INCLUDE $u

Builtin pathnames are [FILE] and [IPC]; the former is used for delivery to files, the latter for delivery via interprocess communication. For mailers that use [IPC] as pathname, the argument vector (A=) must start with TCP or FILE for delivery via a TCP or a Unix domain socket. If TCP is used, the second argument must be the name of the host to contact. Optionally a third argument can be used to specify a port; the default is smtp (port 25). If FILE is used, the second argument must be the name of the Unix domain socket. If the argument vector does not contain $u then sendmail will speak SMTP (or LMTP if the mailer flag z is specified) to the mailer.

If no Eol field is defined, then the default is "\r\n" for SMTP mailers and "\n" for others. The Sender and Recipient rewriting sets may either be a simple ruleset id or may be two ids separated by a slash; if so, the first rewriting set is applied to envelope addresses and the second is applied to headers. Setting any value to zero disables the corresponding mailer-specific rewriting.

The Directory is actually a colon-separated path of directories to try.
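As a sketch of the [IPC] conventions above (the mailer name, flag set, and ruleset numbers here are illustrative, not prescribed by this guide), an SMTP relay mailer might look like:

    Mrelay, P=[IPC], F=mDFMuX, S=11/31, R=61, E=\r\n, L=2040,
            T=DNS/RFC822/SMTP, A=TCP $h

Because the A= field starts with TCP and contains no $u, sendmail speaks SMTP to the host $h on the default smtp port.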
For example, the definition "D=$z:/" first tries to execute in the recipient's home directory; if that is not available, it tries to execute in the root of the filesystem. This is intended to be used only on the "prog" mailer, since some shells (such as csh) refuse to execute if they cannot read the current directory. Since the queue directory is not normally readable by unprivileged users, csh scripts as recipients can fail.

The Userid specifies the default user and group id to run as, overriding the DefaultUser option (q.v.). If the S mailer flag is also specified, this user and group will be set as the effective uid and gid for the process. This may be given as user:group to set both the user and group id; either may be an integer or a symbolic name to be looked up in the passwd and group files respectively. If only a symbolic user name is specified, the group id in the passwd file for that user is used as the group id.

The Charset field is used when converting a message to MIME; this is the character set used in the Content-Type: header. If this is not set, the DefaultCharset option is used, and if that is not set, the value "unknown-8bit" is used. WARNING: this field applies to the sender's mailer, not the recipient's mailer. For example, if the envelope sender address lists an address on the local network and the recipient is on an external network, the character set will be set from the Charset= field for the local network mailer, not that of the external network mailer.

The Type= field sets the type information used in MIME error messages as defined by RFC 1894. It is actually three values separated by slashes: the MTA-type (that is, the description of how hosts are named), the address type (the description of e-mail addresses), and the diagnostic type (the description of error diagnostic codes). Each of these must be a registered value or begin with "X-". The default is "dns/rfc822/smtp".
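Tying together the Userid field and the S mailer flag described above, a hypothetical local-delivery mailer line (the program path and the user and group names are placeholders) could pin the delivery agent to a dedicated account:

    Mlocal, P=/usr/libexec/mail.local, F=lsDFMS, U=mailnull:mail,
            T=DNS/RFC822/X-Unix, A=mail -d $u

With the S flag present in F= and U=mailnull:mail given, the agent runs with that effective uid and gid rather than resetting to the default user.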
The m= field specifies the maximum number of messages to attempt to deliver on a single SMTP or LMTP connection. The default is infinite.

The r= field specifies the maximum number of recipients to attempt to deliver in a single envelope. It defaults to 100.

The /= field specifies a new root directory for the mailer. The path is macro expanded and then passed to the "chroot" system call. The root directory is changed before the Directory field is consulted or the uid is changed.

The Wait= field specifies the maximum time to wait for the mailer to return after sending all data to it. This applies to mailers that have been forked by sendmail.

The Queuegroup= field specifies the default queue group in which received mail should be queued. This can be overridden by other means as explained in section ``Queue Groups and Queue Directories''.

5.5. H -- Define Header

The format of the header lines that sendmail inserts into the message is defined by the H line. The syntax of this line is one of the following:

    Hhname: htemplate
    H[?mflags?]hname: htemplate
    H[?${macro}?]hname: htemplate

Continuation lines in this spec are reflected directly into the outgoing message. The htemplate is macro-expanded before insertion into the message. If the mflags (surrounded by question marks) are specified, at least one of the specified flags must be stated in the mailer definition for this header to be automatically output. If a ${macro} (surrounded by question marks) is specified, the header will be automatically output if the macro is set. The macro may be set using any of the normal methods, including using the macro storage map in a ruleset. If one of these headers is in the input it is reflected to the output regardless of these flags or macros.

Notice: If a ${macro} is used to set a header, then it is useful to add that macro to class $={persistentMacros}, which consists of the macros that should be saved across queue runs.
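As a sketch of the H[?mflags?] form above (these particular header lines are typical of generated configurations, not mandated by this guide):

    H?P?Return-Path: <$g>
    H?D?Date: $a

The first line is emitted only for mailers whose definition includes the P flag, the second only for mailers with the D flag; a header present in the input is passed through regardless.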
Some headers have special semantics that will be described later.

A secondary syntax allows validation of headers as they are being read. To enable validation, use:

    HHeader: $>Ruleset
    HHeader: $>+Ruleset

The indicated Ruleset is called for the specified Header, and can return $#error to reject or quarantine the message or $#discard to discard the message (as with the other check_* rulesets). The ruleset receives the header field-body as argument, i.e., not the header field-name; see also ${hdr_name} and ${currHeader}. The header is treated as a structured field, that is, text in parentheses is deleted before processing, unless the second form $>+ is used. Note: only one ruleset can be associated with a header; sendmail will silently ignore multiple entries. For example, the configuration lines:

    HMessage-Id: $>CheckMessageId

    SCheckMessageId
    R< $+ @ $+ >		$@ OK
    R$*			$#error $: Illegal Message-Id header

would refuse any message that had a Message-Id: header of any of the following forms:

    Message-Id: <>
    Message-Id: some text
    Message-Id: <legal text@domain> extra crud

A default ruleset that is called for headers which don't have a specific ruleset defined for them can be specified by:

    H*: $>Ruleset

or

    H*: $>+Ruleset

5.6. O -- Set Option

There are a number of global options that can be set from a configuration file. Options are represented by full words; some are also representable as single characters for back compatibility. The syntax of this line is:

    O option=value

This sets option option to be value. Note that there must be a space between the letter `O' and the name of the option. An older version is:

    Oovalue

where the option o is a single character. Depending on the option, value may be a string, an integer, a boolean (with legal values "t", "T", "f", or "F"; the default is TRUE), or a time interval.
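For instance, the long and short forms below are equivalent ways of setting the alias file (the path is illustrative):

    O AliasFile=/etc/mail/aliases
    OA/etc/mail/aliases

The first uses the full option name with a space after `O'; the second uses the historic single-character option A with no space.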
All filenames used in options should be absolute paths, i.e., starting with '/'. Relative filenames most likely cause surprises during operation (unless otherwise noted).

The options supported (with the old, one-character names in brackets) are:

AliasFile=spec, spec, ...
    [A] Specify possible alias file(s). Each spec should be in the format ``class: info'' where class: is optional and defaults to ``implicit''. Note that info is required for all classes except "ldap". For the "ldap" class, if info is not specified, a default info value is used as follows:

        -k (&(objectClass=sendmailMTAAliasObject)
             (sendmailMTAAliasName=aliases)
             (|(sendmailMTACluster=${sendmailMTACluster})
               (sendmailMTAHost=$j))
             (sendmailMTAKey=%0))
        -v sendmailMTAAliasValue

    Depending on how sendmail is compiled, valid classes are "implicit" (search through a compiled-in list of alias file types, for back compatibility), "hash" (if NEWDB is specified), "btree" (if NEWDB is specified), "dbm" (if NDBM is specified), "stab" (internal symbol table -- not normally used unless you have no other database lookup), "sequence" (use a sequence of maps previously declared), "ldap" (if LDAPMAP is specified), or "nis" (if NIS is specified). If a list of specs is provided, sendmail searches them in order.

AliasWait=timeout
    [a] If set, wait up to timeout (units default to minutes) for an "@:@" entry to exist in the alias database before starting up. If it does not appear in the timeout interval, issue a warning.

AllowBogusHELO
    [no short name] If set, allow HELO SMTP commands that don't include a host name. Setting this violates RFC 1123 section 5.2.5, but is necessary to interoperate with several SMTP clients. If there is a value, it is still checked for legitimacy.

AuthMaxBits=N
    [no short name] Limit the maximum encryption strength for the security layer in SMTP AUTH (SASL). Default is essentially unlimited.
    This allows turning off additional encryption in SASL if STARTTLS is already encrypting the communication, because the existing encryption strength is taken into account when choosing an algorithm for the security layer. For example, if STARTTLS is used and the symmetric cipher is 3DES, then the key length (in bits) is 168. Hence setting AuthMaxBits to 168 will disable any encryption in SASL.

AuthMechanisms
    [no short name] List of authentication mechanisms for AUTH (separated by spaces). The advertised list of authentication mechanisms will be the intersection of this list and the list of available mechanisms as determined by the Cyrus SASL library. If STARTTLS is active, EXTERNAL will be added to this list. In that case, the value of {cert_subject} is used as authentication id.

AuthOptions
    [no short name] List of options for SMTP AUTH consisting of single characters with intervening white space or commas.

    m   require mechanisms which provide mutual authentication (only available if using Cyrus SASL v2 or later).

AuthRealm
    [no short name] The authentication realm that is passed to the Cyrus SASL library. If no realm is specified, $j is used.

BadRcptThrottle=N
    [no short name] If set and the specified number of recipients in a single SMTP transaction have been rejected, sleep for one second after each subsequent RCPT command in that transaction.

BlankSub=c
    [B] Set the blank substitution character to c. Unquoted spaces in addresses are replaced by this character. Defaults to space (i.e., no change is made).

CACertPath
    [no short name] Path to directory with certificates of CAs. This directory must contain the hashes of each CA certificate as filenames (or as links to them).

CACertFile
    [no short name] File containing one or more CA certificates; see the section about STARTTLS for more information.
CheckAliases
    [n] Validate the RHS of aliases when rebuilding the alias database.

CheckpointInterval=N
    [C] Checkpoint the queue every N (default 10) addresses sent. If your system crashes during delivery to a large list, this prevents retransmission to any but the last N recipients.

ClassFactor=fact
    [z] The indicated factor is multiplied by the message class (determined by the Precedence: field in the user header and the P lines in the configuration file) and subtracted from the priority. Thus, messages with a higher Priority: will be favored. Defaults to 1800.

ClientCertFile
    [no short name] File containing the certificate of the client, i.e., this certificate is used when sendmail acts as client (for STARTTLS).

ClientKeyFile
    [no short name] File containing the private key belonging to the client certificate (for STARTTLS if sendmail runs as client).

ClientPortOptions=options
    [O] Set client SMTP options. The options are key=value pairs separated by commas. Known keys are:

        Port        Name/number of source port for connection (defaults to any free port)
        Addr        Address mask (defaults to INADDR_ANY)
        Family      Address family (defaults to INET)
        SndBufSize  Size of TCP send buffer
        RcvBufSize  Size of TCP receive buffer
        Modifier    Options (flags) for the client

    The Address mask may be a numeric address in IPv4 dot notation or IPv6 colon notation or a network name. Note that if a network name is specified, only the first IP address returned for it will be used. This may cause indeterminate behavior for network names that resolve to multiple addresses. Therefore, use of an address is recommended. Modifier can be the following characters:

        h   use name of interface for HELO command
        A   don't use AUTH when sending e-mail
        S   don't use STARTTLS when sending e-mail

    If ``h'' is set, the name corresponding to the outgoing interface address (whether chosen via the Connection parameter or the default) is used for the HELO/EHLO command.
    However, the name must not start with a square bracket and it must contain at least one dot. This is a simple test whether the name is not an IP address (in square brackets) but a qualified hostname. Note that multiple ClientPortOptions settings are allowed in order to give settings for each protocol family (e.g., one for Family=inet and one for Family=inet6). A restriction placed on one family only affects outgoing connections on that particular family.

ColonOkInAddr
    [no short name] If set, colons are acceptable in e-mail addresses (e.g., "host:user"). If not set, colons indicate the beginning of an RFC 822 group construct ("groupname: member1, member2, ... memberN;"). Doubled colons are always acceptable ("nodename::user") and proper route-addr nesting is understood ("<@relay:user@host>"). Furthermore, this option defaults on if the configuration version level is less than 6 (for back compatibility). However, it must be off for full compatibility with RFC 822.

ConnectionCacheSize=N
    [k] The maximum number of open connections that will be cached at a time. The default is one. This delays closing the current connection until either this invocation of sendmail needs to connect to another host or it terminates. Setting it to zero defaults to the old behavior, that is, connections are closed immediately. Since this consumes file descriptors, the connection cache should be kept small: 4 is probably a practical maximum.

ConnectionCacheTimeout=timeout
    [K] The maximum amount of time a cached connection will be permitted to idle without activity. If this time is exceeded, the connection is immediately closed. This value should be small (on the order of ten minutes). Before sendmail uses a cached connection, it always sends a RSET command to check the connection; if this fails, it reopens the connection. This keeps your end from failing if the other end times out.
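To illustrate the per-family ClientPortOptions behavior described above (the addresses are placeholders), a dual-stack host might use:

    O ClientPortOptions=Family=inet, Addr=192.0.2.10
    O ClientPortOptions=Family=inet6, Addr=2001:db8::10

Each line only constrains outgoing connections made over its own address family.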
    The point of this option is to be a good network neighbor and avoid using up excessive resources on the other end. The default is five minutes.

ConnectOnlyTo=address
    [no short name] This can be used to override the connection address (for testing purposes).

ConnectionRateThrottle=N
    [no short name] If set to a positive value, allow no more than N incoming connections in a one-second period per daemon. This is intended to flatten out peaks and allow the load average checking to cut in. Defaults to zero (no limits).

ConnectionRateWindowSize=N
    [no short name] Define the length of the interval for which the number of incoming connections is maintained. The default is 60 seconds.

ControlSocketName=name
    [no short name] Name of the control socket for daemon management. A running sendmail daemon can be controlled through this named socket. Available commands are: help, mstat, restart, shutdown, and status. The status command returns the current number of daemon children, the maximum number of daemon children, the free disk space (in blocks) of the queue directory, and the load average of the machine expressed as an integer. If not set, no control socket will be available. Solaris and pre-4.4BSD kernel users should see the note in sendmail/README.

CRLFile=name
    [no short name] Name of file that contains certificate revocation status, useful for X.509v3 authentication. CRL checking requires at least OpenSSL version 0.9.7. Note: if a CRLFile is specified but the file is unusable, STARTTLS is disabled.

DHParameters
    Possible values are:

        5     use precomputed 512 bit prime
        1     generate 1024 bit prime
        2     generate 2048 bit prime
        none  do not use Diffie-Hellman
        NAME  load prime from file

    This is only required if a ciphersuite containing DSA/DH is used. If ``5'' is selected, then precomputed, fixed primes are used. This is the default for the client side. If ``1'' or ``2'' is selected, then prime values are computed during startup.
    The server side default is ``1''. Note: this operation can take a significant amount of time on a slow machine (several seconds), but it is only done once at startup. If ``none'' is selected, then TLS ciphersuites containing DSA/DH cannot be used. If a file name is specified (which must be an absolute path), then the primes are read from it.

DaemonPortOptions=options
    [O] Set server SMTP options. Each instance of DaemonPortOptions leads to an additional incoming socket. The options are key=value pairs. Known keys are:

        Name              User-definable name for the daemon (defaults to "Daemon#")
        Port              Name/number of listening port (defaults to "smtp")
        Addr              Address mask (defaults to INADDR_ANY)
        Family            Address family (defaults to INET)
        InputMailFilters  List of input mail filters for the daemon
        Listen            Size of listen queue (defaults to 10)
        Modifier          Options (flags) for the daemon
        SndBufSize        Size of TCP send buffer
        RcvBufSize        Size of TCP receive buffer
        children          maximum number of children per daemon, see MaxDaemonChildren
        DeliveryMode      Delivery mode per daemon, see DeliveryMode
        refuseLA          RefuseLA per daemon
        delayLA           DelayLA per daemon
        queueLA           QueueLA per daemon

    The Name key is used for error messages and logging. The Address mask may be a numeric address in IPv4 dot notation or IPv6 colon notation or a network name. Note that if a network name is specified, only the first IP address returned for it will be used. This may cause indeterminate behavior for network names that resolve to multiple addresses. Therefore, use of an address is recommended. The Family key defaults to INET (IPv4). IPv6 users who wish to also accept IPv6 connections should add additional Family=inet6 DaemonPortOptions lines. The InputMailFilters key overrides the default list of input mail filters listed in the InputMailFilters option. If multiple input mail filters are required, they must be separated by semicolons (not commas).
    Modifier can be a sequence (without any delimiters) of the following characters:

        a   always require authentication
        b   bind to interface through which mail has been received
        c   perform hostname canonification (.cf)
        f   require fully qualified hostname (.cf)
        s   run smtps (SMTP over SSL) instead of smtp
        u   allow unqualified addresses (.cf)
        A   disable AUTH (overrides 'a' modifier)
        C   don't perform hostname canonification
        E   disallow ETRN (see RFC 2476)
        O   optional; if opening the socket fails, ignore it
        S   don't offer STARTTLS

    That is, one way to specify a message submission agent (MSA) that always requires authentication is:

        O DaemonPortOptions=Name=MSA, Port=587, M=Ea

    The modifier ``f'' disallows addresses of the form [...] configuration.

DefaultAuthInfo
    [no short name] Filename that contains default authentication information for outgoing connections. This file must contain the user id, the authorization id, the password (plain text), the realm, and the list of mechanisms to use on separate lines, and must be readable by root (or the trusted user) only. If no realm is specified, $j is used. If no mechanisms are specified, the list given by AuthMechanisms is used. Notice: this option is deprecated and will be removed in future versions. Moreover, it doesn't work for the MSP since it can't read the file (the file must not be group/world-readable, otherwise sendmail will complain). Use the authinfo ruleset instead, which provides more control over the usage of the data anyway.

DefaultCharSet=charset
    [no short name] When a message that has 8-bit characters but is not in MIME format is converted to MIME (see the EightBitMode option), a character set must be included in the Content-Type: header. This character set is normally set from the Charset= field of the mailer descriptor. If that is not set, the value of this option is used. If this option is not set, the value "unknown-8bit" is used.
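Building on the modifier list above, another sketch (the daemon name and port follow common convention rather than a requirement of this guide) is a socket that speaks SMTP over SSL:

    O DaemonPortOptions=Name=TLSMTA, Port=465, M=s

The s modifier makes this listener run smtps instead of plain smtp.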
DataFileBufferSize=threshold
    [no short name] Set the threshold, in bytes, before a memory-based queue data file becomes disk-based. The default is 4096 bytes.

DeadLetterDrop=file
    [no short name] Defines the location of the system-wide dead.letter file, formerly hardcoded to /usr/tmp/dead.letter. If this option is not set (the default), sendmail will not attempt to save to a system-wide dead.letter file in the event it cannot bounce the mail to the user or postmaster. Instead, it will rename the qf file as it has in the past when the dead.letter file could not be opened.

DefaultUser=user:group
    [u] Set the default userid for mailers to user:group. If group is omitted and user is a user name (as opposed to a numeric user id), the default group listed in the /etc/passwd file for that user is used as the default group. Both user and group may be numeric. Mailers without the S flag in the mailer definition will run as this user. Defaults to 1:1. The value can also be given as a symbolic user name.[19]

DelayLA=LA
    [no short name] When the system load average exceeds LA, sendmail will sleep for one second on most SMTP commands and before accepting connections.

DeliverByMin=time
    [0] Set minimum time for Deliver By SMTP Service Extension (RFC 2852). If 0, no time is listed; if less than 0, the extension is not offered; if greater than 0, it is listed as minimum time for the EHLO keyword DELIVERBY.

DeliveryMode=x
    [d] Deliver in mode x. Legal modes are:

        i   Deliver interactively (synchronously)
        b   Deliver in background (asynchronously)
        q   Just queue the message (deliver during queue run)
        d   Defer delivery and all map lookups (deliver during queue run)

    Defaults to ``b'' if no option is specified, ``i'' if it is specified but given no argument (i.e., ``Od'' is equivalent to ``Odi'').

____________________
[19] The old g option has been combined into the DefaultUser option.
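A minimal example of the DeliveryMode forms above, in the equivalent long and historic short spellings:

    O DeliveryMode=q
    Odq

Both tell sendmail to queue messages and deliver them only during queue runs.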
    The -v command line flag sets this to i. Note: for internal reasons, ``i'' does not work if a milter is enabled which can reject or delete recipients. In that case the mode will be changed to ``b''.

DialDelay=sleeptime
    [no short name] Dial-on-demand network connections can see timeouts if a connection is opened before the call is set up. If this is set to an interval and a connection times out on the first connection being attempted, sendmail will sleep for this amount of time and try again. This should give your system time to establish the connection to your service provider. Units default to seconds, so "DialDelay=5" uses a five-second delay. Defaults to zero (no retry). This delay only applies to mailers which have the Z flag set.

DirectSubmissionModifiers=modifiers
    Defines ${daemon_flags} for direct (command line) submissions. If not set, ${daemon_flags} is either "CC f" if the option -G is used or "c u" otherwise. Note that only the "CC", "c", "f", and "u" flags are checked.

DontBlameSendmail=option,option,...
    [no short name] In order to avoid possible cracking attempts caused by world- and group-writable files and directories, sendmail does paranoid checking when opening most of its support files. If for some reason you absolutely must run with, for example, a group-writable /etc directory, then you will have to turn off this checking (at the cost of making your system more vulnerable to attack). The details of these flags are described above. Use of this option is not recommended.

DontExpandCnames
    [no short name] The standards say that all host addresses used in a mail message must be fully canonical. For example, if your host is named "Cruft.Foo.ORG" and also has an alias of "", the former name must be used at all times. This is enforced during host name canonification ($[ ... $] lookups).
    If this option is set, the protocols are ignored and the "wrong" thing is done. However, the IETF is moving toward changing this standard, so the behavior may become acceptable. Please note that hosts downstream may still rewrite the address to be the true canonical name, however.

DontInitGroups
    [no short name] If set, sendmail will avoid using the initgroups(3) call. If you are running NIS, this causes a sequential scan of the groups.byname map, which can cause your NIS server to be badly overloaded in a large domain. The cost of this is that the only group found for users will be their primary group (the one in the password file), which will make file access permissions somewhat more restrictive. Has no effect on systems that don't have group lists.

DontProbeInterfaces
    [no short name] Sendmail normally finds the names of all interfaces active on your machine when it starts up and adds their name to the $=w class of known host aliases. If you have a large number of virtual interfaces or if your DNS inverse lookups are slow, this can be time consuming. This option turns off that probing. However, you will need to be certain to include all variant names in the $=w class by some other mechanism. If set to loopback, loopback interfaces (e.g., lo0) will not be probed.

DontPruneRoutes
    [R] Normally, sendmail tries to eliminate any unnecessary explicit routes when sending an error message (as discussed in RFC 1123 S 5.2.6). For example, when sending an error message to

        <@known1,@known2,@known3:user@unknown>
DoubleBounceAddress=error-address [no short name] If an error occurs when sending an error message, send the error report (termed a "double bounce" because it is an error "bounce" that occurs when trying to send another error "bounce") to the indicated address. The address is macro expanded at the time of delivery. If not set, defaults to "postmaster". If set to an empty string, double bounces are dropped.

EightBitMode=action [8] Set handling of eight-bit data. There are two kinds of eight-bit data: that declared as such using the BODY=8BITMIME ESMTP declaration or the -B8BITMIME command line flag, and undeclared 8-bit data, that is, input that just happens to be eight bits. There are three basic operations that can happen: undeclared 8-bit data can be automatically converted to 8BITMIME, undeclared 8-bit data can be passed as-is without conversion to MIME (``just send 8''), and declared 8-bit data can be converted to 7-bits for transmission to a non-8BITMIME mailer. The possible actions are:

   s   Reject undeclared 8-bit data (``strict'')
   m   Convert undeclared 8-bit data to MIME (``mime'')
   p   Pass undeclared 8-bit data (``pass'')

In all cases properly declared 8BITMIME data will be converted to 7BIT as needed.

ErrorHeader=file-or-message [E] Prepend error messages with the indicated message. If it begins with a slash, it is assumed to be the pathname of a file containing a message (this is the recommended setting). Otherwise, it is a literal message. The error file might contain the name, email address, and/or phone number of a local postmaster who could provide assistance to end users. If the option is missing or null, or if it names a file which does not exist or which is not readable, no message is printed.

ErrorMode=x [e] Dispose of errors using mode x.
The values for x are:

   p   Print error messages (default)
   q   No messages, just give exit status
   m   Mail back errors
   w   Write back errors (mail if user not logged in)
   e   Mail back errors (when applicable) and give zero exit status always

Note that the last mode, "e", is for Berknet error processing and should not be used in normal circumstances. Note, too, that mode "q" only applies to errors recognized before sendmail forks for background delivery.

FallbackMXhost=fallbackhost [V] If specified, the fallbackhost acts like a very low priority MX on every host. MX records will be looked up for this host, unless the name is surrounded by square brackets. This is intended to be used by sites with poor network connectivity. Messages which are undeliverable due to temporary address failures (e.g., DNS failure) also go to the FallbackMXhost.

FallBackSmartHost=hostname [no short name] If specified, the fallback smart host is used in a last-ditch effort for each host, i.e., delivery to it is attempted if delivery to the host's regular MX hosts fails.

FastSplit [no short name] If set to a value greater than zero (the default is one), it suppresses the MX lookups on addresses when they are initially sorted, i.e., for the first delivery attempt. It also limits the number of envelopes that are delivered immediately when a message is split; any remaining envelopes are queued up and must be taken care of by a queue run. Since the default submission method is via SMTP (either from a MUA or via the MSP), the value of FastSplit is seldom used to limit the number of processes to deliver the envelopes.

ForkEachJob [Y] If set, deliver each job that is run from the queue in a separate process.

ForwardPath=path [J] Set the path for searching for users' .forward files. The default is "$z/.forward". Some sites that use the automounter may prefer to change this to "/var/forward/$u:$z/.forward"; this will search first in /var/forward/username and then in ~username/.forward (but only if the first file does not exist).

HeloName=name [no short name] Set the name to be used for HELO/EHLO (instead of $j).

HoldExpensive [c] If an outgoing mailer is marked as being expensive, don't connect immediately.

HostsFile=path [no short name] The path to the hosts database, normally "/etc/hosts".
This option is only consulted when sendmail is canonifying addresses, and then only when "files" is in the "hosts" service switch entry. In particular, this file is never used when looking up host addresses; that is under the control of the system gethostbyname(3) routine.

HostStatusDirectory=path [no short name] The location of the long term host status information. When set, information about the status of hosts (e.g., host down or not accepting connections) will be shared between all sendmail processes; normally, this information is only held within a single queue run. This option requires a connection cache of at least 1 to function. If the option begins with a leading `/', it is an absolute pathname; otherwise, it is relative to the mail queue directory. A suggested value for sites desiring persistent host status is ".hoststat" (i.e., a subdirectory of the queue directory).

IgnoreDots [i] Ignore dots in incoming messages. This is always disabled (that is, dots are always accepted) when reading SMTP mail.

InputMailFilters=name,name,... A comma separated list of filters which determines which filters (see the "X -- Mail Filter (Milter) Definitions" section) and the invocation sequence are contacted for incoming SMTP messages. If none are set, no filters will be contacted.

LDAPDefaultSpec=spec [no short name] Sets a default map specification for LDAP maps. The value should only contain LDAP specific settings such as "-h host -p port -d bindDN". The settings will be used for all LDAP maps unless the individual map specification overrides a setting. This option should be set before any LDAP maps are defined.

LogLevel=n [L] Set the log level to n. Defaults to 9.

Mxvalue [no long version] Set the macro x to value. This is intended only for use from the command line. The -M flag is preferred.
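For illustration, a hypothetical configuration fragment setting some of the options above (the host name and filter names are made up for the example):

   O LDAPDefaultSpec=-h ldap.example.com -p 389
   O InputMailFilters=filterA,filterB
   O LogLevel=9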
MailboxDatabase [no short name] Type of lookup to find information about local mailboxes, defaults to ``pw'' which uses getpwnam. Other types can be introduced by adding them to the source code, see libsm/mbdb.c for details.

UseMSP [no short name] Use as mail submission program, i.e., allow group writable queue files if the group is the same as that of a set-group-ID sendmail binary. See the file sendmail/SECURITY in the distribution tarball.

MatchGECOS [G] Allow fuzzy matching on the GECOS field. If this flag is set, and the usual user name lookups fail (that is, there is no alias with this name and a getpwnam fails), sequentially search the password file for a matching entry in the GECOS field. This also requires that MATCHGECOS be turned on during compilation. This option is not recommended.

MaxAliasRecursion=N [no short name] The maximum depth of alias recursion (default: 10).

MaxDaemonChildren=N [no short name] If set, sendmail will refuse connections when it has more than N children processing incoming mail or automatic queue runs. This does not limit the number of outgoing connections. If the default DeliveryMode (background) is used, then sendmail may create an almost unlimited number of children (depending on the number of transactions and the relative execution times of mail reception and mail delivery). If the limit should be enforced, then a DeliveryMode other than background must be used. If not set, there is no limit to the number of children -- that is, the system load average controls this.

MaxHeadersLength=N [no short name] The maximum length of the sum of all headers. This can be used to prevent a denial of service attack. The default is no limit.

MaxHopCount=N [h] The maximum hop count. Messages that have been processed more than N times are assumed to be in a loop and are rejected. Defaults to 25.
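As an illustration, the limits above might be set as follows (the values are examples only; appropriate limits depend on the site):

   O MaxDaemonChildren=40
   O MaxHeadersLength=32768
   O MaxHopCount=25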
MaxMessageSize=N [no short name] Specify the maximum message size to be advertised in the ESMTP EHLO response. Messages larger than this will be rejected. If set to a value greater than zero, that value will be listed in the SIZE response, otherwise SIZE is advertised in the ESMTP EHLO response without a parameter.

MaxMimeHeaderLength=N[/M] [no short name] Sets the maximum length of certain MIME header field values to N characters. These MIME header fields are determined by being a member of class {checkMIMETextHeaders}, which currently contains only the header Content-Description. For some of these headers which take parameters, the maximum length of each parameter is set to M if specified. If /M is not specified, one half of N will be used. By default, these values are 2048 and 1024, respectively. To allow any length, a value of 0 can be specified.

MaxNOOPCommands=N Override the default of MAXNOOPCOMMANDS for the number of useless commands, see Section "Measures against Denial of Service Attacks".

MaxQueueChildren=N [no short name] When set, this limits the number of concurrent queue runner processes to N. When there are multiple queue groups defined and the total number of queue runners for these queue groups would exceed MaxQueueChildren, the queue groups will not all run concurrently. That is, some portion of the queue groups will run concurrently such that MaxQueueChildren will not be exceeded, while the remaining queue groups will be run later (in round robin order). See also MaxRunnersPerQueue and the section Queue Group Declaration. Notice: sendmail does not count individual queue runners, but only sets of processes that act on a workgroup. Hence the actual number of queue runners may be lower than the limit imposed by MaxQueueChildren. This discrepancy can be large if some queue runners have to wait for a slow server and if short intervals are used.

MaxQueueRunSize=N [no short name] The maximum number of jobs that will be processed in a single queue run. If not set, there is no limit on the size. If you have very large queues or a very short queue run interval this could be unstable.
However, since the first N jobs in queue directory order are run (rather than the N highest priority jobs) this should be set as high as possible to avoid "losing" jobs that happen to fall late in the queue directory. Note: this option also restricts the number of entries printed by mailq. That is, if MaxQueueRunSize is set to a value N larger than zero, then only N entries are printed per queue group.

MaxRecipientsPerMessage=N [no short name] The maximum number of recipients that will be accepted per message in an SMTP transaction. Note: setting this too low can interfere with sending mail from MUAs that use SMTP for initial submission. If not set, there is no limit on the number of recipients per envelope.

MaxRunnersPerQueue=N [no short name] This sets the default maximum number of queue runners for queue groups. Up to N queue runners will work in parallel on a queue group's messages. This is useful where the processing of a message in the queue might delay the processing of subsequent messages. Such a delay may be the result of non-erroneous situations such as a low bandwidth connection. May be overridden on a per queue group basis by setting the Runners option; see the section Queue Group Declaration. The default is 1 when not set.

MeToo [m] Send to me too, even if I am in an alias expansion. This option is deprecated and will be removed from a future version.

Milter [no short name] This option has several sub(sub)options. The names of the suboptions are separated by dots. At the first level the following options are available:

   LogLevel   Log level for input mail filter actions, defaults to LogLevel.
   macros     Specifies list of macros to transmit to filters. See list below.

The ``macros'' option has the following suboptions which specify the list of macros to transmit to milters after a certain event occurred.
   connect   After session connection start
   helo      After EHLO/HELO command
   envfrom   After MAIL From command
   envrcpt   After RCPT To command
   data      After DATA command
   eoh       After DATA command and header
   eom       After DATA command and terminating ``.''

By default the lists of macros are empty. Example:

   O Milter.LogLevel=12
   O Milter.macros.connect=j, _, {daemon_name}

MinFreeBlocks=N [b] Insist on at least N blocks free on the filesystem that holds the queue files before accepting email via SMTP. If there is insufficient space sendmail gives a 452 response to the MAIL command. This invites the sender to try again later.

MinQueueAge=age [no short name] Don't process any queued jobs that have been in the queue less than the indicated time interval. This is intended to allow you to get responsiveness by processing the queue fairly frequently without thrashing your system by trying jobs too often. The default units are minutes. Note: This option is ignored for queue runs that select a subset of the queue, i.e., "-q[!][I|R|S|Q][string]".

MustQuoteChars=s [no short name] Sets the list of characters that must be quoted if used in a full name that is in the phrase part of a ``phrase <address>'' syntax. The default is ``'.''. The characters ``@,;:\()[]'' are always added to this list.

NiceQueueRun [no short name] The priority of queue runners (nice(3)). This value must be greater than or equal to zero.

NoRecipientAction [no short name] The action to take when you receive a message that has no valid recipient headers (To:, Cc:, Bcc:, or Apparently-To: - the last included for back compatibility with old sendmails).
It can be None to pass the message on unmodified, which violates the protocol, Add-To to add a To: header with any recipients it can find in the envelope (which might expose Bcc: recipients), Add-Apparently-To to add an Apparently-To: header (this is only for back-compatibility and is officially deprecated), Add-To-Undisclosed to add a header "To: undisclosed-recipients:;" to make the header legal without disclosing anything, or Add-Bcc to add an empty Bcc: header.

OldStyleHeaders [o] Assume that the headers may be in old format, i.e., spaces delimit names. This actually turns on an adaptive algorithm: if any recipient address contains a comma, parenthesis, or angle bracket, it will be assumed that commas already exist. If this flag is not on, only commas delimit names. Headers are always output with commas between the names. Defaults to off.

OperatorChars=charlist [$o macro] The list of characters that are considered to be "operators", that is, characters that delimit tokens. All operator characters are tokens by themselves; sequences of non-operator characters are also tokens. White space characters separate tokens but are not tokens themselves - for example, "AAA.BBB" has three tokens, but "AAA BBB" has two. If not set, OperatorChars defaults to ".:@[]"; additionally, the characters "()<>,;" are always operators. Note that OperatorChars must be set in the configuration file before any rulesets.

PidFile=filename [no short name] Filename of the pid file. (default is _PATH_SENDMAILPID). The filename is macro-expanded before it is opened, and unlinked when sendmail exits.

PostmasterCopy=postmaster [P] If set, copies of error messages will be sent to the named postmaster. Only the header of the failed message is sent. Errors resulting from messages with a negative precedence will not be sent. Since most errors are user problems, this is probably not a good idea on large sites, and arguably contains all sorts of privacy violations, but it seems to be popular with certain operating systems vendors. The address is macro expanded at the time of delivery.
Defaults to no postmaster copies.

PrivacyOptions=opt,opt,... [p] Set the privacy options. ``Privacy'' is really a misnomer; many of these are just a way of insisting on stricter adherence to the SMTP protocol. The options can be selected from:

   public              Allow open access
   needmailhelo        Insist on HELO or EHLO command before MAIL
   needexpnhelo        Insist on HELO or EHLO command before EXPN
   noexpn              Disallow EXPN entirely, implies noverb
   needvrfyhelo        Insist on HELO or EHLO command before VRFY
   novrfy              Disallow VRFY entirely
   noetrn              Disallow ETRN entirely
   noverb              Disallow VERB entirely
   restrictmailq       Restrict mailq command
   restrictqrun        Restrict -q command line flag
   restrictexpand      Restrict -bv and -v command line flags
   noreceipts          Don't return success DSNs[20]
   nobodyreturn        Don't return the body of a message with DSNs
   goaway              Disallow essentially all SMTP status queries
   authwarnings        Put X-Authentication-Warning: headers in messages and log warnings
   noactualrecipient   Don't put X-Actual-Recipient lines in DSNs which reveal the actual account that addresses map to

The "goaway" pseudo-flag sets all flags except "noreceipts", "restrictmailq", "restrictqrun", "restrictexpand", "noetrn", and "nobodyreturn". If mailq is restricted, only people in the same group as the queue directory can print the queue. If queue runs are restricted, only root and the owner of the queue directory can run the queue. The "restrictexpand" pseudo-flag instructs sendmail to drop privileges when the -bv option is given by users who are neither root nor the TrustedUser, so that users cannot read private aliases, forward files, or :include: files; it also overrides the -v (verbose) command line option to prevent information leakage. Authentication Warnings add warnings about various conditions that may indicate attempts to spoof the mail system, such as using a non-standard queue directory.

____________________
[20] N.B.: the noreceipts flag turns off support for RFC 1891 (Delivery Status Notification).

ProcessTitlePrefix=string [no short name] Prefix the process title shown on 'ps' listings with string. The string will be macro processed.

QueueDirectory=dir [Q] The QueueDirectory option serves two purposes. First, it specifies the directory or set of directories that comprise the default queue group. Second, it specifies the directory D which is the ancestor of all queue directories, and which sendmail uses as its current working directory. When sendmail dumps core, it leaves its core files in D. There are two cases. If dir ends with an asterisk (e.g., /var/spool/mqueue/qd*), then all of the directories or symbolic links to directories beginning with `qd' in /var/spool/mqueue will be used as queue directories of the default queue group, and /var/spool/mqueue will be used as the working directory D.
Otherwise, dir must name a directory (usually /var/spool/mqueue): the default queue group consists of the single queue directory dir, and the working directory D is set to dir. To define additional groups of queue directories, use the configuration file `Q' command. Do not change the queue directory structure while sendmail is running.

QueueFactor=factor [q] Use factor as the multiplier in the map function to decide when to just queue up jobs rather than run them. This value is divided by the difference between the current load average and the load average limit (QueueLA option) to determine the maximum message priority that will be sent. Defaults to 600000.

QueueLA=LA [x] When the system load average exceeds LA and the QueueFactor (q) option divided by the difference in the current load average and the QueueLA option plus one is less than the priority of the message, just queue messages (i.e., don't try to send them). Defaults to 8 multiplied by the number of processors online on the system (if that can be determined).

QueueFileMode=mode [no short name] Default permissions for queue files (octal). If not set, sendmail uses 0600 unless its real and effective uid are different in which case it uses 0644.

QueueSortOrder=algorithm [no short name] Sets the algorithm used for sorting the queue. Only the first character of the value is used. Legal values are "host" (to order by the name of the first host name of the first recipient), "filename" (to order by the name of the queue file), "time" (to order by the submission/creation time), "random" (to order randomly), "modification" (to order by the modification time of the qf file, oldest entries first), "none" (to not order), and "priority" (to order by message priority). Host ordering makes better use of the connection cache, but may tend to process low priority messages that go to a single host over high priority messages that go to several hosts; it probably shouldn't be used on slow network links. Filename and modification time ordering saves the overhead of reading all of the queued items before starting the queue run. Creation (submission) time ordering is almost always a bad idea, since it allows large, bulk mail to go out before smaller, personal mail. Random is useful if several queue runners are started by hand which will be working on different parts of the queue at the same time. Priority ordering is the default.

QueueTimeout=timeout [T] A synonym for "Timeout.queuereturn". Use that form instead of the "QueueTimeout" form.

RandFile [no short name] Name of file containing random data or the name of the UNIX socket if EGD is used.
A (required) prefix "egd:" or "file:" specifies the type. STARTTLS requires this filename if the compile flag HASURANDOMDEV is not set (see sendmail/README).

ResolverOptions=options [I] Set resolver options. Values can be set using +flag and cleared using -flag; the flags can be "debug", "aaonly", "usevc", "primary", "igntc", "recurse", "defnames", "stayopen", "use_inet6", or "dnsrch". The string "HasWildcardMX" (without a + or -) can be specified to turn off matching against MX records when doing name canonifications. The string "WorkAroundBrokenAAAA" (without a + or -) can be specified to work around some broken nameservers which return SERVFAIL (a temporary failure) on T_AAAA (IPv6) lookups. Notice: it might be necessary to apply the same (or similar) options to submit.cf too.

RequiresDirfsync [no short name] This option can be used to override the compile time flag REQUIRES_DIR_FSYNC at runtime by setting it to false. If the compile time flag is not set, the option is ignored. The flag turns on support for file systems that require calling fsync() for a directory if the metadata in it has been changed. This should be turned on at least for older versions of ReiserFS; it is enabled by default for Linux. According to some information this flag is not needed anymore for kernel 2.4.16 and newer.

RrtImpliesDsn [R] If this option is set, a "Return-Receipt-To:" header causes the request of a DSN, which is sent to the envelope sender as required by RFC 1891, not to the address given in the header.

RunAsUser=user [no short name] The user parameter may be a user name (looked up in /etc/passwd) or a numeric user id; either form can have ":group" attached (where group can be numeric or symbolic). If set to a non-zero (non-root) value, sendmail will change to this user id shortly after startup[21]. This avoids a certain class of security problems.
However, this means that all ".forward" and ":include:" files must be readable by the indicated user and all files to be written must be writable by user. Also, all file and program deliveries will be marked unsafe unless the option DontBlameSendmail=NonRootSafeAddr is set, in which case the delivery will be done as user. It is also incompatible with the SafeFileEnvironment option. In other words, it may not actually add much to security on an average system, and may in fact detract from security (because other file permissions must be loosened). However, it should be useful on firewalls and other places where users don't have accounts and the aliases file is well constrained.

RecipientFactor=fact [y] The indicated factor is added to the priority (thus lowering the priority of the job) for each recipient, i.e., this value penalizes jobs with large numbers of recipients. Defaults to 30000.

RefuseLA=LA [X] When the system load average exceeds LA, refuse incoming SMTP connections. Defaults to 12 multiplied by the number of processors online on the system (if that can be determined).

RejectLogInterval=timeout [no short name] Log interval when refusing connections for this long (default: 3h).

RetryFactor=fact [Z] The indicated factor is added to the priority every time a job is processed, so that jobs that have repeatedly failed are tried later in a queue run. Defaults to 90000.

____________________
[21] When running as a daemon, it changes to this user after accepting a connection but before reading any SMTP commands.

SafeFileEnvironment=dir [no short name] If this option is set, sendmail will do a chroot(2) call into the indicated directory before doing any file writes. If the file name specified by the user begins with dir, that partial path name will be stripped off before writing, so (for example) if the SafeFileEnvironment variable is set to "/safe" then aliases of "/safe/logs/file" and "/logs/file" actually indicate the same file. Additionally, if this option is set, sendmail refuses to deliver to symbolic links.
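A hypothetical fragment combining some of the options above (the user, group, and directory names are made up for the example; appropriate values depend on the site):

   O RunAsUser=mailnull:mail
   O SafeFileEnvironment=/safe
   O RefuseLA=12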
SaveFromLine [f] Save UNIX-style "From" lines at the front of headers. Normally they are assumed redundant and discarded.

SendMimeErrors [j] If set, send error messages in MIME format (see RFC 2045 and RFC 1344 for details). If disabled, sendmail will not return the DSN keyword in response to an EHLO and will not do Delivery Status Notification processing as described in RFC 1891.

ServerCertFile [no short name] File containing the certificate of the server, i.e., this certificate is used when sendmail acts as server (used for STARTTLS).

ServerKeyFile [no short name] File containing the private key belonging to the server certificate (used for STARTTLS).

ServiceSwitchFile=filename [no short name] If your host operating system has a service switch abstraction (e.g., /etc/nsswitch.conf on Solaris or /etc/svc.conf on Ultrix and DEC OSF/1) that service will be consulted and this option is ignored. Otherwise, this is the name of a file that provides the list of methods used to implement particular services. The syntax is a series of lines, each of which is a sequence of words. The first word is the service name, and following words are service types. The services that sendmail consults directly are "aliases" and "hosts." Service types can be "dns", "nis", "nisplus", or "files" (with the caveat that the appropriate support must be compiled in before the service can be referenced). If ServiceSwitchFile is not specified, it defaults to /etc/mail/service.switch. If that file does not exist, the default switch is:

   aliases   files
   hosts     dns nis files

SevenBitInput [7] Strip input to seven bits for compatibility with old systems. This shouldn't be necessary.

SharedMemoryKey [no short name] Key to use for shared memory segment; if not set (or 0), shared memory will not be used.
If set to -1 sendmail can select a key itself, provided that SharedMemoryKeyFile is also set. Requires support for shared memory to be compiled into sendmail.

SharedMemoryKeyFile [no short name] If SharedMemoryKey is set to -1 then the automatically selected shared memory key will be stored in the specified file.

SingleLineFromHeader [no short name] If set, From: lines that have embedded newlines are unwrapped onto one line. This is to get around a botch in Lotus Notes that apparently cannot understand legally wrapped RFC 822 headers.

SingleThreadDelivery [no short name] If set, a client machine will never try to open two SMTP connections to a single server machine at the same time, even in different processes. That is, if another sendmail is already talking to some host a new sendmail will not open another connection. This property is of mixed value; although this reduces the load on the other machine, it can cause mail to be delayed (for example, if one sendmail is delivering a huge message, other sendmails won't be able to send even small messages). Also, it requires another file descriptor (for the lock file) per connection, so you may have to reduce the ConnectionCacheSize option to avoid running out of per-process file descriptors. Requires the HostStatusDirectory option.

SmtpGreetingMessage=message [$e macro] The message printed when the SMTP server starts up. Defaults to "$j Sendmail $v ready at $b".

SoftBounce If set, issue temporary errors (4xy) instead of permanent errors (5xy). This can be useful during testing of a new configuration to avoid erroneous bouncing of mails.

StatusFile=file [S] Log summary statistics in the named file. If no file name is specified, "statistics" is used. If not set, no summary statistics are saved. This file does not grow in size. It can be printed using the mailstats(8) program.

SuperSafe [s] This option can be set to True, False, Interactive, or PostMilter.
If set to True, sendmail will be super-safe when running things, i.e., always instantiate the queue file, even if you are going to attempt immediate delivery. Sendmail always instantiates the queue file before returning control to the client under any circumstances. This should really always be set to True. The Interactive value has been introduced in 8.12 and can be used together with DeliveryMode=i. It skips some synchronization calls which are effectively doubled in the code execution path for this mode. If set to PostMilter, sendmail defers synchronizing the queue file until any milters have signaled acceptance of the message. PostMilter is useful only when sendmail is running as an SMTP server; in all other situations it acts the same as True.

TLSSrvOptions [no short name] List of options for SMTP STARTTLS for the server consisting of single characters with intervening white space or commas. The flag ``V'' disables client verification, and hence it is not possible to use a client certificate for relaying. Currently there are no other flags available.

TempFileMode=mode [F] The file mode for transcript files, files to which sendmail delivers directly, files in the HostStatusDirectory, and StatusFile. It is interpreted in octal by default. Defaults to 0600.

Timeout.type=timeout [r; subsumes old T option as well] Set timeout values. For more information, see section 4.1.

TimeZoneSpec=tzinfo [t] Set the local time zone info to tzinfo. If this option is not set, the TZ environment variable is cleared (so the system default is used); if set but null, the user's TZ variable is used; and if set and non-null the TZ variable is set to this value.

TrustedUser=user [no short name] The user parameter may be a user name (looked up in /etc/passwd) or a numeric user id. Trusted user for file ownership and starting the daemon. If set, generated alias databases and the control socket (if configured) will automatically be owned by this user.
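For illustration, a sketch of how some of the options above appear in sendmail.cf (the values are examples only):

   O SuperSafe=True
   O TempFileMode=0600
   O TrustedUser=smmsp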
TryNullMXList [w] If this system is the "best" (that is, lowest preference) MX for a given host, its configuration rules should normally detect this situation and treat that condition specially by forwarding the mail to a UUCP feed, treating it as local, or whatever. However, in some cases (such as Internet firewalls) you may want to try to connect directly to that host as though it had no MX records at all. Setting this option causes sendmail to try this. The downside is that errors in your configuration are likely to be diagnosed as "host unknown" or "message timed out" instead of something more meaningful. This option is disrecommended.

UnixFromLine=fromline [$l macro] Defines the format used when sendmail must add a UNIX-style From_ line (that is, a line beginning "From<space>user"). Defaults to "From $g $d". Don't change this unless your system uses a different UNIX mailbox format (very unlikely).

UnsafeGroupWrites [no short name] If set (default), :include: and .forward files that are group writable are considered "unsafe", that is, they cannot reference programs or write directly to files. World writable :include: and .forward files are always unsafe. Note: use DontBlameSendmail instead; this option is deprecated.

UseErrorsTo [l] If there is an "Errors-To:" header, send error messages to the addresses listed there. They normally go to the envelope sender. Use of this option causes sendmail to violate RFC 1123. This option is disrecommended and deprecated.

UserDatabaseSpec=udbspec [U] The user database specification.

Verbose [v] Run in verbose mode. If this is set, sendmail adjusts options HoldExpensive (old c) and DeliveryMode (old d) so that all mail is delivered completely in a single job so that you can see the entire delivery process. Option Verbose should never be set in the configuration file; it is intended for command line use only.
Note that the use of option Verbose can cause authentication information to leak, if you use a sendmail client to authenticate to a server. If the authentication mechanism uses plain text passwords (as with LOGIN or PLAIN), then the password could be compromised. To avoid this, do not install sendmail set-user-ID root, and disable the VERB SMTP command with a suitable PrivacyOptions setting.

XscriptFileBufferSize=threshold [no short name] Set the threshold, in bytes, before a memory-based queue transcript file becomes disk-based. The default is 4096 bytes.

All options can be specified on the command line using the -O or -o flag, but most will cause sendmail to relinquish its set-user-ID permissions. The options that will not cause this are SevenBitInput [7], EightBitMode [8], MinFreeBlocks [b], CheckpointInterval [C], DeliveryMode [d], ErrorMode [e], IgnoreDots [i], SendMimeErrors [j], LogLevel [L], MeToo [m], OldStyleHeaders [o], PrivacyOptions [p], SuperSafe [s], Verbose [v], QueueSortOrder, MinQueueAge, DefaultCharSet, DialDelay, NoRecipientAction, ColonOkInAddr, MaxQueueRunSize, SingleLineFromHeader, and AllowBogusHELO. Actually, PrivacyOptions [p] given on the command line are added to those already specified in the sendmail.cf file, i.e., they can't be reset. Also, M (define macro) when defining the r or s macros is also considered "safe".

5.7. P -- Precedence Definitions

Values for the "Precedence:" field may be defined using the P control line. The syntax of this field is:

   Pname=num

When the name is found in a "Precedence:" field, the message class is set to num. Higher numbers mean higher precedence. Numbers less than zero have the special property that if an error occurs during processing the body of the message will not be returned; this is expected to be used for "bulk" mail such as through mailing lists. The default precedence is zero.
For example, our list of precedences is:

    Pfirst-class=0
    Pspecial-delivery=100
    Plist=-30
    Pbulk=-60
    Pjunk=-100

People writing mailing list exploders are encouraged to use
"Precedence: list".  Older versions of sendmail (which discarded all
error returns for negative precedences) didn't recognize this name,
giving it a default precedence of zero.  This allows list maintainers
to see error returns on both old and new versions of sendmail.

5.8.  V -- Configuration Version Level

To provide compatibility with old configuration files, the V line has
been added to define some very basic semantics of the configuration
file.  These are not intended to be long term supports; rather, they
describe compatibility features which will probably be removed in
future releases.  N.B.: these version levels have nothing to do with
the version number on the files.  For example, as of this writing
version 10 config files (specifically, 8.10) used version level 9
configurations.

"Old" configuration files are defined as version level one.  Version
level two files make the following changes:

(1)  Host name canonification ($[ ... $]) appends a dot if the name
     is recognized; this gives the config file a way of finding out
     if anything matched.  (Actually, this just initializes the
     "host" map with the "-a." flag -- you can reset it to anything
     you prefer by declaring the map explicitly.)

(2)  Default host name extension is consistent throughout processing;
     version level one configurations turned off domain extension
     (that is, adding the local domain name) during certain points in
     processing.  Version level two configurations are expected to
     include a trailing dot to indicate that the name is already
     canonical.

(3)  Local names that are not aliases are passed through a new
     distinguished ruleset five; this can be used to append a local
     relay.  This behavior can be prevented by resolving the local
     name with an initial `@'.
     That is, something that resolves to a local mailer and a user
     name of "vikki" will be passed through ruleset five, but a user
     name of "@vikki" will have the `@' stripped, will not be passed
     through ruleset five, but will otherwise be treated the same as
     the prior example.  The expectation is that this might be used
     to implement a policy where mail sent to "vikki" was handled by
     a central hub, but mail sent to "vikki@localhost" was delivered
     directly.

Version level three files allow # initiated comments on all lines.
Exceptions are backslash escaped # marks and the $# syntax.

Version level four configurations are completely equivalent to level
three for historical reasons.

Version level five configuration files change the default definition
of $w to be just the first component of the hostname.

Version level six configuration files change many of the local
processing options (such as aliasing and matching the beginning of
the address for `|' characters) to be mailer flags; this allows
fine-grained control over the special local processing.  Level six
configuration files may also use long option names.  The ColonOkInAddr
option (to allow colons in the local-part of addresses) defaults on
for lower numbered configuration files; the configuration file
requires some additional intelligence to properly handle the RFC 822
group construct.

Version level seven configuration files used new option names to
replace old macros ($e became SmtpGreetingMessage, $l became
UnixFromLine, and $o became OperatorChars).  Also, prior to version
seven, the F=q flag (use 250 instead of 252 return value for SMTP
VRFY commands) was assumed.

Version level eight configuration files allow $# on the left hand
side of ruleset lines.

Version level nine configuration files allow parentheses in rulesets,
i.e. they are not treated as comments and hence removed.

Version level ten configuration files allow queue group definitions.
The V line may have an optional /vendor to indicate that this
configuration file uses modifications specific to a particular
vendor[22].  You may use "/Berkeley" to emphasize that this
configuration file uses the Berkeley dialect of sendmail.

____________________
[22] And of course, vendors are encouraged to add themselves to the
list of recognized vendors by editing the routine setvendor in conf.c.
Please send e-mail to sendmail@Sendmail.ORG to register your vendor
dialect.

5.9.  K -- Key File Declaration

Special maps can be defined using the line:

    Kmapname mapclass arguments

The mapname is the handle by which this map is referenced in the
rewriting rules.  The mapclass is the name of a type of map; these
are compiled in to sendmail.  The arguments are interpreted depending
on the class; typically, there would be a single argument naming the
file containing the map.

Maps are referenced using the syntax:

    $( map key $@ arguments $: default $)

where either or both of the arguments or default portion may be
omitted.  The $@ arguments may appear more than once.  The indicated
key and arguments are passed to the appropriate mapping function.  If
it returns a value, it replaces the input.  If it does not return a
value and the default is specified, the default replaces the input.
Otherwise, the input is unchanged.

The arguments are passed to the map for arbitrary use.  Most map
classes can interpolate these arguments into their values using the
syntax "%n" (where n is a digit) to indicate the corresponding
argument.  Argument "%0" indicates the database key.  For example,
the rule

    R$- ! $+        $: $(uucp $1 $@ $2 $: $2 @ $1 . UUCP $)

looks up the UUCP name in a (user defined) UUCP map; if not found it
turns it into ".UUCP" form.  The database might contain records like:

    decvax          %1@%0.DEC.COM
    research        %1@%0.ATT.COM

Note that default clauses never do this mapping.
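The "%n" substitution described above is simple enough to model in a
few lines of Python.  This is an illustrative sketch only, not
sendmail internals; the function name and signature are invented for
the example:

```python
# Sketch of "%n" interpolation in map values: %0 is the database key,
# %1..%9 are the extra $@ arguments passed in the map call.
def interpolate(template: str, key: str, args: list) -> str:
    out = []
    i = 0
    while i < len(template):
        # A '%' followed by a digit selects the key (%0) or an argument.
        if template[i] == "%" and i + 1 < len(template) and template[i + 1].isdigit():
            n = int(template[i + 1])
            out.append(key if n == 0 else args[n - 1])
            i += 2
        else:
            out.append(template[i])
            i += 1
    return "".join(out)

# The uucp example above: key "decvax", one argument (the user from $2).
print(interpolate("%1@%0.DEC.COM", "decvax", ["eric"]))  # eric@decvax.DEC.COM
```

So a lookup of "decvax" with argument "eric" against the first record
yields "eric@decvax.DEC.COM", matching the rewriting rule's intent.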
The built-in map with both name and class "host" is the host name
canonicalization lookup.  Thus, the syntax:

    $(host hostname$)

is equivalent to:

    $[hostname$]

There are many defined classes.

dbm     Database lookups using the ndbm(3) library.  Sendmail must be
        compiled with NDBM defined.

btree   Database lookups using the btree interface to the Berkeley DB
        library.  Sendmail must be compiled with NEWDB defined.

hash    Database lookups using the hash interface to the Berkeley DB
        library.  Sendmail must be compiled with NEWDB defined.

nis     NIS lookups.  Sendmail must be compiled with NIS defined.

nisplus NIS+ lookups.  Sendmail must be compiled with NISPLUS
        defined.  The argument is the name of the table to use for
        lookups, and the -k and -v flags may be used to set the key
        and value columns respectively.

hesiod  Hesiod lookups.  Sendmail must be compiled with HESIOD
        defined.

ldap    LDAP X500 directory lookups.  Sendmail must be compiled with
        LDAPMAP defined.  The map supports most of the standard
        arguments and most of the command line arguments of the
        ldapsearch program.  Note that, by default, if a single query
        matches multiple values, only the first value will be
        returned unless the -z (value separator) map flag is set.
        Also, the -1 map flag will treat a multiple value return as
        if there were no matches.

netinfo NeXT NetInfo lookups.  Sendmail must be compiled with NETINFO
        defined.

text    Text file lookups.  The format of the text file is defined by
        the -k (key field number), -v (value field number), and -z
        (field delimiter) flags.

ph      PH query map.  Contributed and supported by Mark Roth,
        roth@uiuc.edu.  For more information, consult the web site
        "- dev.cites.uiuc.edu/sendmail/".

nsd     nsd map for IRIX 6.5 and later.  Contributed and supported by
        Bob Mende of SGI, mende@sgi.com.

stab    Internal symbol table lookups.  Used internally for aliasing.
implicit
        Really should be called "alias" -- this is used to get the
        default lookups for alias files, and is the default if no
        class is specified for alias files.

user    Looks up users using getpwnam(3).  The -v flag can be used to
        specify the name of the field to return (although this is
        normally used only to check the existence of a user).

host    Canonifies host domain names.  Given a host name it calls the
        name server to find the canonical name for that host.

bestmx  Returns the best MX record for a host name given as the key.
        The current machine is always preferred -- that is, if the
        current machine is one of the hosts listed as a
        lowest-preference MX record, then it will be guaranteed to be
        returned.  This can be used to find out if this machine is
        the target for an MX record, and mail can be accepted on that
        basis.  If the -z flag is given, then all MX names are
        returned, separated by the given delimiter.

dns     This map requires the option -R to specify the DNS resource
        record type to lookup.  The following types are supported: A,
        AAAA, AFSDB, CNAME, MX, NS, PTR, SRV, and TXT.  A map lookup
        will return only one record.  Hence for some types, e.g., MX
        records, the return value might be a random element of the
        list due to randomizing in the DNS resolver.

sequence
        The arguments on the `K' line are a list of maps; the
        resulting map searches the argument maps in order until it
        finds a match for the indicated key.  For example, if the key
        definition is:

            Kmap1 ...
            Kmap2 ...
            Kseqmap sequence map1 map2

        then a lookup against "seqmap" first does a lookup in map1.
        If that is found, it returns immediately.  Otherwise, the
        same key is used for map2.

syslog  The key is logged via syslogd(8).  The lookup returns the
        empty string.

switch  Much like the "sequence" map except that the order of maps is
        determined by the service switch.
        The argument is the name of the service to be looked up; the
        values from the service switch are appended to the map name
        to create new map names.  For example, consider the key
        definition:

            Kali switch aliases

        together with the service switch entry:

            aliases nis files

        This causes a query against the map "ali" to search maps
        named "ali.nis" and "ali.files" in that order.

dequote Strip double quotes (") from a name.  It does not strip
        backslashes, and will not strip quotes if the resulting
        string would contain unscannable syntax (that is, basic
        errors like unbalanced angle brackets; more sophisticated
        errors such as unknown hosts are not checked).  The intent is
        for use when trying to accept mail from systems such as
        DECnet that routinely quote odd syntax such as

            "49ers::ubell"

        A typical usage is probably something like:

            Kdequote dequote
            ...
            R$-             $: $(dequote $1 $)
            R$- $+          $: $>3 $1 $2

        Care must be taken to prevent unexpected results; for
        example,

            "|someprogram < input > output"

        will have quotes stripped, but the result is probably not
        what you had in mind.  Fortunately these cases are rare.

regex   The map definition on the K line contains a regular
        expression.  Any key input is compared to that expression
        using the POSIX regular expressions routines regcomp(),
        regerr(), and regexec().  Refer to the documentation for
        those routines for more information about the regular
        expression matching.  No rewriting of the key is done if the
        -m flag is used.  Without it, the key is discarded or, if -s
        is used, it is substituted by the substring matches,
        delimited by $| or the string specified with the -d flag.
        The flags available for the map are

            -n      not
            -f      case sensitive
            -b      basic regular expressions (default is extended)
            -s      substring match
            -d      set the delimiter used for -s
            -a      append string to key
            -m      match only, do not replace/discard value
            -D      perform no lookup in deferred delivery mode.
        The -s flag can include an optional parameter which can be
        used to select the substrings in the result of the lookup.
        For example,

            -s1,3,4

        Notes: to match a $ in a string, \$$ must be used.  If the
        pattern contains spaces, they must be replaced with the blank
        substitution character, unless it is space itself.

program The arguments on the K line are the pathname to a program and
        any initial parameters to be passed.  When the map is called,
        the key is added to the initial parameters and the program is
        invoked as the default user/group id.  The first line of
        standard output is returned as the value of the lookup.  This
        has many potential security problems, and has terrible
        performance; it should be used only when absolutely
        necessary.

macro   Set or clear a macro value.  To set a macro, pass the value
        as the first argument in the map lookup.  To clear a macro,
        do not pass an argument in the map lookup.  The map always
        returns the empty string.  Examples of typical usage include:

            Kstorage macro
            ...
            # set macro ${MyMacro} to the ruleset match
            R$+             $: $(storage {MyMacro} $@ $1 $) $1
            # set macro ${MyMacro} to an empty string
            R$*             $: $(storage {MyMacro} $@ $) $1
            # clear macro ${MyMacro}
            R$-             $: $(storage {MyMacro} $) $1

arith   Perform simple arithmetic operations.  The operation is given
        as key; currently +, -, *, /, %, |, & (bitwise OR, AND), l
        (for less than), =, and r (for random) are supported.  The
        two operands are given as arguments.  The lookup returns the
        result of the computation, i.e., TRUE or FALSE for
        comparisons, integer values otherwise.  The r operator
        returns a pseudo-random number whose value lies between the
        first and second operand (which requires that the first
        operand is smaller than the second).  All options which are
        possible for maps are ignored.  A simple example is:

            Kcomp arith
            ...
            Scheck_etrn
            R$*             $: $(comp l $@ $&{load_avg} $@ 7 $) $1
            RFALSE          $# error ...
socket  The socket map uses a simple request/reply protocol over TCP
        or UNIX domain sockets to query an external server.  Both
        requests and replies are text based and encoded as
        netstrings, i.e., a string "hello there" becomes:

            11:hello there,

        Note: neither requests nor replies end with CRLF.  The
        request consists of the database map name and the lookup key
        separated by a space character:

            <mapname> ' ' <key>

        The server responds with a status indicator and the result
        (if any):

            <status> ' ' <result>

        The status indicator specifies the result of the lookup
        operation itself and is one of the following upper case
        words:

            OK        the key was found, result contains the looked
                      up value
            NOTFOUND  the key was not found, the result is empty
            TEMP      a temporary failure occurred
            TIMEOUT   a timeout occurred on the server side
            PERM      a permanent failure occurred

        In case of errors (status TEMP, TIMEOUT or PERM) the result
        field may contain an explanatory message.  However, the
        explanatory message is not used any further by sendmail.
        Example replies:

            31:OK resolved.address@example.com,
            56:OK error:550 5.7.1 User does not accept mail from sender,

        in case of successful lookups, or:

            8:NOTFOUND,

        in case the key was not found, or:

            55:TEMP this text explains that we had a temporary failure,

        in case of a temporary map lookup failure.  The socket map
        uses the same syntax as milters (see Section "X -- Mail
        Filter (Milter) Definitions") to specify the remote endpoint,
        e.g.,

            Ksocket mySocketMap inet:12345@127.0.0.1

        If multiple socket maps define the same remote endpoint, they
        will share a single connection to this endpoint.

Most of these accept as arguments the same optional flags and a
filename (or a mapname for NIS; the filename is the root of the
database path, so that ".db" or some other extension appropriate for
the database type will be added to get the actual database name).
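The netstring framing used by the socket map is easy to sketch.  The
following Python model is illustrative only (the function names are
made up, and this is not sendmail code):

```python
# Netstring framing as used by the socket map: the payload is prefixed
# with its byte length and a colon, and terminated by a comma.
def to_netstring(s: str) -> str:
    return f"{len(s.encode())}:{s},"

def from_netstring(s: str) -> str:
    length, rest = s.split(":", 1)
    # The payload must end with ',' and match the declared length.
    assert rest.endswith(",") and len(rest) - 1 == int(length)
    return rest[:-1]

# A request is "<mapname> <key>"; a reply is "<status> <result>".
print(to_netstring("mySocketMap some.key"))   # 20:mySocketMap some.key,
print(from_netstring("8:NOTFOUND,"))          # NOTFOUND
```

A real client would write the request netstring to the TCP or UNIX
domain socket and read the reply netstring back, with no CRLF on
either side, exactly as the protocol description above specifies.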
Known flags are:

-o      Indicates that this map is optional -- that is, if it cannot
        be opened, no error is produced, and sendmail will behave as
        if the map existed but was empty.

-N, -O  If neither -N or -O are specified, sendmail uses an adaptive
        algorithm to decide whether or not to look for null bytes on
        the end of keys.  It starts by trying both; if it finds any
        key with a null byte it never tries again without a null byte
        and vice versa.  If -N is specified it never tries without a
        null byte and if -O is specified it never tries with a null
        byte.  Setting one of these can speed matches but is never
        necessary.  If both -N and -O are specified, sendmail will
        never try any matches at all -- that is, everything will
        appear to fail.

-ax     Append the string x on successful matches.  For example, the
        default host map appends a dot on successful matches.

-Tx     Append the string x on temporary failures.  For example, x
        would be appended if a DNS lookup returned "server failed" or
        an NIS lookup could not locate a server.  See also the -t
        flag.

-f      Do not fold upper to lower case before looking up the key.

-m      Match only (without replacing the value).  If you only care
        about the existence of a key and not the value (as you might
        when searching the NIS map "hosts.byname" for example), this
        flag prevents the map from substituting the value.  However,
        the -a argument is still appended on a match, and the default
        is still taken if the match fails.

-kkeycol
        The key column name (for NIS+) or number (for text lookups).
        For LDAP maps this is an LDAP filter string in which %s is
        replaced with the literal contents of the lookup key and %0
        is replaced with the LDAP escaped contents of the lookup key
        according to RFC 2254.  If the flag -K is used, then %1
        through %9 are replaced with the LDAP escaped contents of the
        arguments specified in the map lookup.

-vvalcol
        The value column name (for NIS+) or number (for text
        lookups).
        For LDAP maps this is the name of one or more attributes to
        be returned; multiple attributes can be separated by commas.
        If not specified, all attributes found in the match will be
        returned.  The attributes listed can also include a type and
        one or more objectClass values for matching as described in
        the LDAP section.

-zdelim The column delimiter (for text lookups).  It can be a single
        character or one of the special strings "\n" or "\t" to
        indicate newline or tab respectively.  If omitted entirely,
        the column separator is any sequence of white space.  For
        LDAP maps this is the separator character to combine multiple
        values into a single return string.  If not set, the LDAP
        lookup will only return the first match found.  For DNS maps
        this is the separator character at which the result of a
        query is cut off if it is too long.

-t      Normally, when a map attempts to do a lookup and the server
        fails (e.g., sendmail couldn't contact any name server; this
        is not the same as an entry not being found in the map), the
        message being processed is queued for future processing.  The
        -t flag turns off this behavior, letting the temporary
        failure (server down) act as though it were a permanent
        failure (entry not found).  It is particularly useful for DNS
        lookups, where someone else's misconfigured name server can
        cause problems on your machine.  However, care must be taken
        to ensure that you don't bounce mail that would be resolved
        correctly if you tried again.  A common strategy is to
        forward such mail to another, possibly better connected, mail
        server.

-D      Perform no lookup in deferred delivery mode.  This flag is
        set by default for the host map.

-Sspacesub
        The character to use to replace space characters after a
        successful map lookup (esp. useful for regex and syslog
        maps).

-sspacesub
        For the dequote map only, the character to use to replace
        space characters after a successful dequote.

-q      Don't dequote the key before lookup.
-Llevel For the syslog map only, it specifies the level to use for
        the syslog call.

-A      When rebuilding an alias file, the -A flag causes duplicate
        entries in the text version to be merged.  For example, two
        entries:

            list: user1, user2
            list: user3

        would be treated as though it were the single entry

            list: user1, user2, user3

        in the presence of the -A flag.

Some additional flags are available for the host and dns maps:

-d      delay: specify the resolver's retransmission time interval
        (in seconds).

-r      retry: specify the number of times to retransmit a resolver
        query.

The dns map has another flag:

-B      basedomain: specify a domain that is always appended to
        queries.

The following additional flags are present in the ldap map only:

-R      Do not auto chase referrals.  sendmail must be compiled with
        -DLDAP_REFERRALS to use this flag.

-n      Retrieve attribute names only.

-Vsep   Retrieve both attributes name and value(s), separated by sep.

-rderef Set the alias dereference option to one of never, always,
        search, or find.

-sscope Set search scope to one of base, one (one level), or sub
        (subtree).

-hhost  LDAP server hostname.  Some LDAP libraries allow you to
        specify multiple, space-separated hosts for redundancy.  In
        addition, each of the hosts listed can be followed by a colon
        and a port number to override the default LDAP port.

-pport  LDAP service port.

-H LDAPURI
        Use the specified LDAP URI instead of specifying the hostname
        and port separately with the -h and -p options shown above.
        For example,

            -h server.example.com -p 389 -b dc=example,dc=com

        is equivalent to

            -H ldap://server.example.com:389 -b dc=example,dc=com

        If the LDAP library supports it, the LDAP URI format however
        can also request LDAP over SSL by using ldaps:// instead of
        ldap://.
        For example:

            O LDAPDefaultSpec=-H ldaps://ldap.example.com -b dc=example,dc=com

        Similarly, if the LDAP library supports it, it can also be
        used to specify a UNIX domain socket using ldapi://:

            O LDAPDefaultSpec=-H ldapi://socketfile -b dc=example,dc=com

-bbase  LDAP search base.

-ltimelimit
        Time limit for LDAP queries.

-Zsizelimit
        Size (number of matches) limit for LDAP or DNS queries.

-ddistinguished_name
        The distinguished name to use to login to the LDAP server.

-Mmethod
        The method to authenticate to the LDAP server.  Should be one
        of LDAP_AUTH_NONE, LDAP_AUTH_SIMPLE, or LDAP_AUTH_KRBV4.

-Ppasswordfile
        The file containing the secret key for the LDAP_AUTH_SIMPLE
        authentication method or the name of the Kerberos ticket file
        for LDAP_AUTH_KRBV4.

-1      Force LDAP searches to only succeed if a single match is
        found.  If multiple values are found, the search is treated
        as if no match was found.

-wversion
        Set the LDAP API/protocol version to use.  The default
        depends on the LDAP client libraries in use.  For example,
        -w 3 will cause sendmail to use LDAPv3 when communicating
        with the LDAP server.

-K      Treat the LDAP search key as multi-argument and replace %1
        through %9 in the key with the LDAP escaped contents of the
        lookup arguments specified in the map lookup.

The dbm map appends the strings ".pag" and ".dir" to the given
filename; the hash and btree maps append ".db".  For example, the map
specification

    Kuucp dbm -o -N /etc/mail/uucpmap

specifies an optional map named "uucp" of class "dbm"; it always has
null bytes at the end of every string, and the data is located in
/etc/mail/uucpmap.{dir,pag}.

The program makemap(8) can be used to build any of the three
database-oriented maps.  It takes the following flags:

-f      Do not fold upper to lower case in the map.

-N      Include null bytes in keys.

-o      Append to an existing (old) file.
-r      Allow replacement of existing keys; normally, re-inserting an
        existing key is an error.

-v      Print what is happening.

The sendmail daemon does not have to be restarted to read the new
maps as long as you change them in place; file locking is used so
that the maps won't be read while they are being updated.

New classes can be added in the routine setupmaps in file conf.c.

5.10.  Q -- Queue Group Declaration

In addition to the option QueueDirectory, queue groups can be
declared that define a (group of) queue directories under a common
name.  The syntax is as follows:

    Qname {, field=value}+

where name is the symbolic name of the queue group under which it can
be referenced in various places and the "field=value" pairs define
attributes of the queue group.  The name must only consist of
alphanumeric characters.  Fields are:

Flags      Flags for this queue group.

Nice       The nice(2) increment for the queue group.  This value
           must be greater than or equal to zero.

Interval   The time between two queue runs.

Path       The queue directory of the group (required).

Runners    The number of parallel runners processing the queue.  Note
           that F=f must be set if this value is greater than one.

Jobs       The maximum number of jobs (messages delivered) per queue
           run.

recipients The maximum number of recipients per envelope.  Envelopes
           with more than this number of recipients will be split
           into multiple envelopes in the same queue directory.  The
           default value 0 means no limit.

Only the first character of the field name is checked.

By default, a queue group named mqueue is defined that uses the value
of the QueueDirectory option as path.  Notice: all paths that are
used for queue groups must be subdirectories of QueueDirectory.
Since they can be symbolic links, this isn't a real restriction.  If
QueueDirectory uses a wildcard, then the directory one level up is
considered the ``base'' directory which all other queue directories
must share.  Please make sure that the queue directories do not
overlap, e.g., do not specify

    O QueueDirectory=/var/spool/mqueue/*
    Qone, P=/var/spool/mqueue/dir1
    Qtwo, P=/var/spool/mqueue/dir2

because this also includes "dir1" and "dir2" in the default queue
group.  However,

    O QueueDirectory=/var/spool/mqueue/main*
    Qone, P=/var/spool/mqueue/dir
    Qtwo, P=/var/spool/mqueue/other*

is a valid queue group specification.

Options listed in the ``Flags'' field can be used to modify the
behavior of a queue group.  The ``f'' flag must be set if multiple
queue runners are supposed to work on the entries in a queue group.
Otherwise sendmail will work on the entries strictly sequentially.

The ``Interval'' field sets the time between queue runs.  If no queue
group specific interval is set, then the parameter of the -q option
from the command line is used.

To control the overall number of concurrently active queue runners
the option MaxQueueChildren can be set.  This limits the number of
processes used for running the queues to MaxQueueChildren, though at
any one time fewer processes may be active as a result of queue
options, completed queue runs, system load, etc.

The maximum number of queue runners for an individual queue group can
be controlled via the Runners option.  If set to 0, entries in the
queue will not be processed, which is useful to ``quarantine'' queue
files.  The number of runners per queue group may also be set with
the option MaxRunnersPerQueue, which applies to queue groups that
have no individual limit.  That is, the default value for Runners is
MaxRunnersPerQueue if set, otherwise 1.
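Putting the fields and flags above together, a queue group
declaration might look like the following sketch.  The group name,
path, and values here are hypothetical examples, not defaults:

```
# Hypothetical example: a throttled queue group for slow destinations.
# The path must be a subdirectory of QueueDirectory (see above).
O QueueDirectory=/var/spool/mqueue
Qslow, Path=/var/spool/mqueue/slow, Flags=f, Runners=2, Interval=30m, Jobs=50
```

Note that Flags=f is required here because Runners is greater than
one, and that only the first character of each field name is actually
checked, so "P=", "F=", "R=", "I=", and "J=" would work equally well.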
The field Jobs describes the maximum number of jobs (messages
delivered) per queue run, which is the queue group specific value of
MaxQueueRunSize.

Notice: queue groups should be declared after all queue related
options have been set because queue groups take their defaults from
those options.  If an option is set after a queue group declaration,
the values of options in the queue group are set to the defaults of
sendmail unless explicitly set in the declaration.

Each envelope is assigned to a queue group based on the algorithm
described in section ``Queue Groups and Queue Directories''.

5.11.  X -- Mail Filter (Milter) Definitions

The sendmail Mail Filter API (Milter) is designed to allow
third-party programs access to mail messages as they are being
processed in order to filter meta-information and content.  They are
declared in the configuration file as:

    Xname {, field=value}*

where name is the name of the filter (used internally only) and the
"field=value" pairs define attributes of the filter.  Also see the
documentation for the InputMailFilters option for more information.
Fields are:

    Socket      The socket specification
    Flags       Special flags for this filter
    Timeouts    Timeouts for this filter

Only the first character of the field name is checked (it's
case-sensitive).

The socket specification is one of the following forms:

    S=inet: port @ host
    S=inet6: port @ host
    S=local: path

The first two describe an IPv4 or IPv6 socket listening on a certain
port at a given host or IP address.  The final form describes a named
socket on the filesystem at the given path.

The following flags may be set in the filter description.

    R   Reject connection if filter unavailable.
    T   Temporary fail connection if filter unavailable.

If neither F=R nor F=T is specified, the message is passed through
sendmail in case of filter errors as if the failing filters were not
present.
The timeouts can be set using the four fields inside of the T=
equate:

    C   Timeout for connecting to a filter.  If set to 0, the
        system's connect() timeout will be used.
    S   Timeout for sending information from the MTA to a filter.
    R   Timeout for reading reply from the filter.
    E   Overall timeout between sending end-of-message to filter and
        waiting for the final acknowledgment.

Note the separator between each timeout field is a ';'.  The default
values (if not set) are:

    T=C:5m;S:10s;R:10s;E:5m

where s is seconds and m is minutes.

Examples:

    Xfilter1, S=local:/var/run/f1.sock, F=R
    Xfilter2, S=inet6:999@localhost, F=T, T=S:1s;R:1s;E:5m
    Xfilter3, S=inet:3333@localhost, T=C:2m

5.12.  The User Database

The user database is deprecated in favor of ``virtusertable'' and
``genericstable'' as explained in the file cf/README.  If you have a
version of sendmail with the user database package compiled in, the
handling of sender and recipient addresses is modified.  The location
of this database is controlled with the UserDatabaseSpec option.

5.12.1.  Structure of the user database

The database is a sorted (BTree-based) structure.  User records are
stored with the key:

    user-name:field-name

The sorted database format ensures that user records are clustered
together.  Meta-information is always stored with a leading colon.

Field names define both the syntax and semantics of the value.
Defined fields include:

maildrop
        The delivery address for this user.  There may be multiple
        values of this record.  In particular, mailing lists will
        have one maildrop record for each user on the list.

mailname
        The outgoing mailname for this user.  For each outgoing name,
        there should be an appropriate maildrop record for that name
        to allow return mail.  See also :default:mailname.

mailsender
        Changes any mail sent to this address to have the indicated
        envelope sender.
        This is intended for mailing lists, and will normally be the
        name of an appropriate -request address.  It is very similar
        to the owner-list syntax in the alias file.

fullname
        The full name of the user.

office-address
        The office address for this user.

office-phone
        The office phone number for this user.

office-fax
        The office FAX number for this user.

home-address
        The home address for this user.

home-phone
        The home phone number for this user.

home-fax
        The home FAX number for this user.

project
        A (short) description of the project this person is
        affiliated with.  In the University this is often just the
        name of their graduate advisor.

plan    A pointer to a file from which plan information can be
        gathered.

As of this writing, only a few of these fields are actually being
used by sendmail: maildrop and mailname.  A finger program that uses
the other fields is planned.

5.12.2.  User database semantics

When the rewriting rules submit an address to the local mailer, the
user name is passed through the alias file.  If no alias is found (or
if the alias points back to the same address), the name (with
":maildrop" appended) is then used as a key in the user database.  If
no match occurs (or if the maildrop points at the same address),
forwarding is tried.

If the first token of the user name returned by ruleset 0 is an "@"
sign, the user database lookup is skipped.  The intent is that the
user database will act as a set of defaults for a cluster (in our
case, the Computer Science Division); mail sent to a specific machine
should ignore these defaults.

When mail is sent, the name of the sending user is looked up in the
database.  If that user has a "mailname" record, the value of that
record is used as their outgoing name.  For example, I might have a
record:

    eric:mailname   Eric.Allman@CS.Berkeley.EDU

This would cause my outgoing mail to be sent as Eric.Allman.
If a "maildrop" is found for the user, but no corresponding "mailname" record exists, the record ":default:mailname" is consulted. If present, this is the name of a host to override the local host. For example, in our case we would set it to "CS.Berkeley.EDU". The effect is that anyone known in the database gets their outgoing mail stamped as "user@CS.Berkeley.EDU", but people not listed in the database use the local hostname. Sendmail Installation and Operation Guide SMM:08-147 5.12.3. Creating the database[23] The user database is built from a text file using the makemap utility (in the distribution in the makemap subdirectory). The text file is a series of lines corresponding to userdb records; each line has a key and a value separated by white space. The key is always in the format described above -- for example: eric:maildrop This file is normally installed in a system direc- tory; for example, it might be called /etc/mail/userdb. To make the database version of the map, run the program: makemap btree /etc/mail/userdb < /etc/mail/userdb Then create a config file that uses this. For exam- ple, using the V8 M4 configuration, include the following line in your .mc file: define(`confUSERDB_SPEC', /etc/mail/userdb) 6. OTHER CONFIGURATION There are some configuration changes that can be made by recompiling sendmail. This section describes what changes can be made and what has to be modified to make them. In most cases this should be unnecessary unless you are porting sendmail to a new environment. 6.1. Parameters in devtools/OS/$oscf These parameters are intended to describe the compilation environment, not site policy, and should normally be defined in the operating system configura- tion file. This section needs a complete rewrite. NDBM If set, the new version of the DBM library that allows multiple databases will be used. If neither NDBM nor NEWDB are set, a much less efficient method of alias lookup is used. 
____________________ [23]These instructions are known to be incomplete. Other features are available which provide similar functionality, e.g., virtual hosting and mapping local addresses into a generic form as explained in cf/README. SMM:08-148 Sendmail Installation and Operation Guide NEWDB If set, use the new database package from Berkeley (from 4.4BSD). This package is sub- stantially faster than DBM or NDBM. If NEWDB and NDBM are both set, sendmail will read DBM files, but will create and use NEWDB files. NIS Include support for NIS. If set together with both NEWDB and NDBM, sendmail will create both DBM and NEWDB files if and only if an alias file includes the substring "/yp/" in the name. This is intended for compatibility with Sun Microsystems' mkalias program used on YP masters. NISPLUS Compile in support for NIS+. NETINFO Compile in support for NetInfo (NeXT sta- tions). LDAPMAP Compile in support for LDAP X500 queries. Requires libldap and liblber from the Umich LDAP 3.2 or 3.3 release or equivalent libraries for other LDAP libraries such as OpenLDAP. HESIOD Compile in support for Hesiod. MAP_NSD Compile in support for IRIX NSD lookups. MAP_REGEX Compile in support for regular expression matching. DNSMAP Compile in support for DNS map lookups in the sendmail.cf file. PH_MAP Compile in support for ph lookups. SASL Compile in support for SASL, a required com- ponent for SMTP Authentication support. STARTTLS Compile in support for STARTTLS. EGD Compile in support for the "Entropy Gather- ing Daemon" to provide better random data for TLS. TCPWRAPPERS Compile in support for TCP Wrappers. _PATH_SENDMAILCF The pathname of the sendmail.cf file. Sendmail Installation and Operation Guide SMM:08-149 _PATH_SENDMAILPID The pathname of the sendmail.pid file. SM_CONF_SHM Compile in support for shared memory, see section about "/var/spool/mqueue". MILTER Compile in support for contacting external mail filters built with the Milter API. 
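As a concrete illustration, flags like these are normally set in the build configuration rather than by editing conf.h directly.  The sketch below follows the common devtools convention of a site configuration file; the APPENDDEF macro and file location are assumptions to verify against devtools/README for your release:

	dnl devtools/Site/site.config.m4 -- illustrative only
	APPENDDEF(`confENVDEF', `-DNEWDB -DNIS -DTCPWRAPPERS')
	APPENDDEF(`confLIBS', `-lwrap')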
There are also several compilation flags to indi- cate the environment such as "_AIX3" and "_SCO_unix_". See the sendmail/README file for the latest scoop on these flags. 6.2. Parameters in sendmail/conf.h Parameters and compilation options are defined in conf.h. Most of these need not normally be tweaked; common parameters are all in sendmail.cf. However, the sizes of certain primitive vectors, etc., are included in this file. The numbers following the parameters are their default value. This document is not the best source of informa- tion for compilation flags in conf.h - see sendmail/README or sendmail/conf.h itself. MAXLINE [2048] The maximum line length of any input line. If message lines exceed this length they will still be processed correctly; how- ever, header lines, configuration file lines, alias lines, etc., must fit within this limit. MAXNAME [256] The maximum length of any name, such as a host or a user name. MAXPV [256] The maximum number of parameters to any mailer. This limits the number of reci- pients that may be passed in one transac- tion. It can be set to any arbitrary number above about 10, since sendmail will break up a delivery into smaller batches as needed. A higher number may reduce load on your system, however. MAXQUEUEGROUPS [50] The maximum number of queue groups. SMM:08-150 Sendmail Installation and Operation Guide MAXATOM [1000] The maximum number of atoms (tokens) in a single address. For example, the address "eric@CS.Berkeley.EDU" is seven atoms. MAXMAILERS [25] The maximum number of mailers that may be defined in the configuration file. This value is defined in include/sendmail/sendmail.h. MAXRWSETS [200] The maximum number of rewriting sets that may be defined. The first half of these are reserved for numeric specification (e.g., ``S92''), while the upper half are reserved for auto-numbering (e.g., ``Sfoo''). Thus, with a value of 200 an attempt to use ``S99'' will succeed, but ``S100'' will fail. 
MAXPRIORITIES [25] The maximum number of values for the "Pre- cedence:" field that may be defined (using the P line in sendmail.cf). MAXUSERENVIRON [100] The maximum number of items in the user environment that will be passed to subor- dinate mailers. MAXMXHOSTS [100] The maximum number of MX records we will accept for any single host. MAXMAPSTACK [12] The maximum number of maps that may be "stacked" in a sequence class map. MAXMIMEARGS [20] The maximum number of arguments in a MIME Content-Type: header; additional arguments will be ignored. MAXMIMENESTING [20] The maximum depth to which MIME messages may be nested (that is, nested Message or Multipart documents; this does not limit the number of components in a single Mul- tipart document). MAXDAEMONS [10] The maximum number of sockets sendmail Sendmail Installation and Operation Guide SMM:08-151 will open for accepting connections on different ports. MAXMACNAMELEN [25] The maximum length of a macro name. A number of other compilation options exist. These specify whether or not specific code should be com- piled in. Ones marked with - are 0/1 valued. NETINET- If set, support for Internet protocol net- working is compiled in. Previous versions of sendmail referred to this as DAEMON; this old usage is now incorrect. Defaults on; turn it off in the Makefile if your system doesn't support the Internet proto- cols. NETINET6- If set, support for IPv6 networking is compiled in. It must be separately enabled by adding DaemonPortOptions settings. NETISO- If set, support for ISO protocol network- ing is compiled in (it may be appropriate to #define this in the Makefile instead of conf.h). NETUNIX- If set, support for UNIX domain sockets is compiled in. This is used for control socket support. LOG If set, the syslog routine in use at some sites is used. This makes an informational log record for each message processed, and makes a higher priority log record for internal system errors. 
STRONGLY RECOMMENDED - if you want no logging, turn it off in the configuration file.

MATCHGECOS-
	Compile in the code to do ``fuzzy matching'' on the GECOS field in /etc/passwd.  This also requires that the MatchGECOS option be turned on.

NAMED_BIND-
	Compile in code to use the Berkeley Internet Name Domain (BIND) server to resolve TCP/IP host names.

NOTUNIX
	If you are using a non-UNIX mail format, you can set this flag to turn off special processing of UNIX-style "From " lines.

USERDB-
	Include the experimental Berkeley user information database package.  This adds a new level of local name expansion between aliasing and forwarding.  It also uses the NEWDB package.  This may change in future releases.

The following options are normally turned on in per-operating-system clauses in conf.h.

IDENTPROTO-
	Compile in the IDENT protocol as defined in RFC 1413.  This defaults on for all systems except Ultrix, which apparently has the interesting "feature" that when it receives a "host unreachable" message it closes all open connections to that host.  Since some firewall gateways send this error code when you access an unauthorized port (such as 113, used by IDENT), Ultrix cannot receive email from such hosts.

SYSTEM5
	Set all of the compilation parameters appropriate for System V.

HASFLOCK-
	Use Berkeley-style flock instead of System V lockf to do file locking.  Due to the highly unusual semantics of locks across forks in lockf, this should always be used if at all possible.

HASINITGROUPS
	Set this if your system has the initgroups() call (if you have multiple group support).  This is the default if SYSTEM5 is not defined or if you are on HPUX.

HASUNAME
	Set this if you have the uname(2) system call (or corresponding library routine).  Set by default if SYSTEM5 is set.

HASGETDTABLESIZE
	Set this if you have the getdtablesize(2) system call.

HASWAITPID
	Set this if you have the waitpid(2) system call.
FAST_PID_RECYCLE Set this if your system can possibly reuse the same pid in the same second of time. SFS_TYPE The mechanism that can be used to get file system capacity information. The values Sendmail Installation and Operation Guide SMM:08-153 can be one of SFS_USTAT (use the ustat(2) syscall), SFS_4ARGS (use the four argument statfs(2) syscall), SFS_VFS (use the two argument statfs(2) syscall including <sys/vfs.h>), SFS_MOUNT (use the two argu- ment statfs(2) syscall including <sys/mount.h>), SFS_STATFS (use the two argument statfs(2) syscall including <sys/statfs.h>), SFS_STATVFS (use the two argument statfs(2) syscall including <sys/statvfs.h>), or SFS_NONE (no way to get this information). LA_TYPE The load average type. Details are described below. The are several built-in ways of computing the load average. Sendmail tries to auto-configure them based on imperfect guesses; you can select one using the cc option -DLA_TYPE=type, where type is: LA_INT The kernel stores the load average in the kernel as an array of long integers. The actual values are scaled by a factor FSCALE (default 256). LA_SHORT The kernel stores the load average in the kernel as an array of short integers. The actual values are scaled by a factor FSCALE (default 256). LA_FLOAT The kernel stores the load average in the kernel as an array of double precision floats. LA_MACH Use MACH-style load averages. LA_SUBR Call the getloadavg routine to get the load average as an array of doubles. LA_ZERO Always return zero as the load average. This is the fallback case. If type LA_INT, LA_SHORT, or LA_FLOAT is specified, you may also need to specify _PATH_UNIX (the path to your system binary) and LA_AVENRUN (the name of the variable containing the load average in the kernel; usually "_avenrun" or "avenrun"). 6.3. Configuration in sendmail/conf.c The following changes can be made in conf.c. SMM:08-154 Sendmail Installation and Operation Guide 6.3.1. 
Built-in Header Semantics Not all header semantics are defined in the configuration file. Header lines that should only be included by certain mailers (as well as other more obscure semantics) must be specified in the HdrInfo table in conf.c. This table contains the header name (which should be in all lower case) and a set of header control flags (described below), The flags are: H_ACHECK Normally when the check is made to see if a header line is compatible with a mailer, sendmail will not delete an existing line. If this flag is set, sendmail will delete even existing header lines. That is, if this bit is set and the mailer does not have flag bits set that intersect with the required mailer flags in the header definition in sendmail.cf, the header line is always deleted. H_EOH If this header field is set, treat it like a blank line, i.e., it will signal the end of the header and the beginning of the message text. H_FORCE Add this header entry even if one existed in the message before. If a header entry does not have this bit set, sendmail will not add another header line if a header line of this name already existed. This would nor- mally be used to stamp the message by everyone who handled it. H_TRACE If set, this is a timestamp (trace) field. If the number of trace fields in a message exceeds a preset amount the message is returned on the assumption that it has an aliasing loop. H_RCPT If set, this field contains recipient addresses. This is used by the -t flag to determine who to send to when it is collecting recipients from the message. H_FROM This flag indicates that this field specifies a sender. The order of these fields in the HdrInfo table specifies sendmail's preference for which field to return error messages to. Sendmail Installation and Operation Guide SMM:08-155 H_ERRORSTO Addresses in this header should receive error messages. H_CTE This header is a Content-Transfer- Encoding header. H_CTYPE This header is a Content-Type header. 
H_STRIPVAL Strip the value from the header (for Bcc:). Let's look at a sample HdrInfo specification: struct hdrinfo HdrInfo[] = { /* originator fields, most to least significant */ "resent-sender", H_FROM, "resent-from", H_FROM, "sender", H_FROM, "from", H_FROM, "full-name", H_ACHECK, "errors-to", H_FROM|H_ERRORSTO, /* destination fields */ "to", H_RCPT, "resent-to", H_RCPT, "cc", H_RCPT, "bcc", H_RCPT|H_STRIPVAL, /* message identification and control */ "message", H_EOH, "text", H_EOH, /* trace fields */ "received", H_TRACE|H_FORCE, /* miscellaneous fields */ "content-transfer-encoding", H_CTE, "content-type", H_CTYPE, NULL, 0, }; This structure indicates that the "To:", "Resent- To:", and "Cc:" fields all specify recipient addresses. Any "Full-Name:" field will be deleted unless the required mailer flag (indicated in the configuration file) is specified. The "Message:" and "Text:" fields will terminate the header; these are used by random dissenters around the network world. The "Received:" field will always be added, and can be used to trace messages. There are a number of important points here. First, header fields are not added automatically just because they are in the HdrInfo structure; they must be specified in the configuration file in SMM:08-156 Sendmail Installation and Operation Guide order to be added to the message. Any header fields mentioned in the configuration file but not men- tioned in the HdrInfo structure have default pro- cessing performed; that is, they are added unless they were in the message already. Second, the HdrInfo structure only specifies cliched process- ing; certain headers are processed specially by ad hoc code regardless of the status specified in HdrInfo. For example, the "Sender:" and "From:" fields are always scanned on ARPANET mail to deter- mine the sender[24]; this is used to perform the "return to sender" function. 
The "From:" and "Full-Name:" fields are used to determine the full name of the sender if possible; this is stored in the macro $x and used in a number of ways. 6.3.2. Restricting Use of Email If it is necessary to restrict mail through a relay, the checkcompat routine can be modified. This routine is called for every recipient address. It returns an exit status indicating the status of the message. The status EX_OK accepts the address, EX_TEMPFAIL queues the message for a later try, and other values (commonly EX_UNAVAILABLE) reject the message. It is up to checkcompat to print an error message (using usrerr) if the message is rejected. For example, checkcompat could read: ____________________ [24]Actually, this is no longer true in SMTP; this infor- mation is contained in the envelope. The older ARPANET pro- tocols did not completely distinguish envelope from header. Sendmail Installation and Operation Guide SMM:08-157 int checkcompat(to, e) register ADDRESS *to; register ENVELOPE *e; { register STAB *s; s = stab("private", ST_MAILER, ST_FIND); if (s != NULL && e->e_from.q_mailer != LocalMailer && to->q_mailer == s->s_mailer) { usrerr("No private net mail allowed through this machine"); return (EX_UNAVAILABLE); } if (MsgSize > 50000 && bitnset(M_LOCALMAILER, to->q_mailer)) { usrerr("Message too large for non-local delivery"); e->e_flags |= EF_NORETURN; return (EX_UNAVAILABLE); } return (EX_OK); } This would reject messages greater than 50000 bytes unless they were local. The EF_NORETURN flag can be set in e->e_flags to suppress the return of the actual body of the message in the error return. The actual use of this routine is highly dependent on the implementation, and use should be limited. 6.3.3. New Database Map Classes New key maps can be added by creating a class initialization function and a lookup function. These are then added to the routine setupmaps. 
The initialization function is called as xxx_map_init(MAP *map, char *args) The map is an internal data structure. The args is a pointer to the portion of the configuration file line following the map class name; flags and filenames can be extracted from this line. The ini- tialization function must return true if it suc- cessfully opened the map, false otherwise. The lookup function is called as xxx_map_lookup(MAP *map, char buf[], char **av, int *statp) The map defines the map internally. The buf has the input key. This may be (and often is) used SMM:08-158 Sendmail Installation and Operation Guide destructively. The av is a list of arguments passed in from the rewrite line. The lookup function should return a pointer to the new value. If the map lookup fails, *statp should be set to an exit status code; in particular, it should be set to EX_TEMPFAIL if recovery is to be attempted by the higher level code. 6.3.4. Queueing Function The routine shouldqueue is called to decide if a message should be queued or processed immedi- ately. Typically this compares the message priority to the current load average. The default definition is: bool shouldqueue(pri, ctime) long pri; time_t ctime; { if (CurrentLA < QueueLA) return false; return (pri > (QueueFactor / (CurrentLA - QueueLA + 1))); } If the current load average (global variable CurrentLA, which is set before this function is called) is less than the low threshold load average (option x, variable QueueLA), shouldqueue returns false immediately (that is, it should not queue). If the current load average exceeds the high thres- hold load average (option X, variable RefuseLA), shouldqueue returns true immediately. Otherwise, it computes the function based on the message prior- ity, the queue factor (option q, global variable QueueFactor), and the current and threshold load averages. 
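The decision function above can be restated compactly.  The sketch below mirrors the default shouldqueue() shown earlier; the threshold and factor defaults are assumptions for illustration (the actual values come from the QueueLA and QueueFactor options):

```python
def should_queue(pri, current_la, queue_la=8, queue_factor=600000):
    """Mirror of the default shouldqueue(): below the low load-average
    threshold, never queue; above it, queue low-priority (numerically
    large) messages first, more aggressively as the load rises."""
    if current_la < queue_la:
        return False
    return pri > queue_factor / (current_la - queue_la + 1)
```

Note how the cutoff shrinks as current_la grows: at higher load, messages of progressively higher priority (smaller pri values) are deferred to the queue.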
An implementation wishing to take the actual age of the message into account can also use the ctime parameter, which is the time that the message was first submitted to sendmail. Note that the pri parameter is already weighted by the number of times the message has been tried (although this tends to lower the priority of the message with time); the expectation is that the ctime would be used as an "escape clause" to ensure that messages are eventually processed. Sendmail Installation and Operation Guide SMM:08-159 6.3.5. Refusing Incoming SMTP Connections The function refuseconnections returns true if incoming SMTP connections should be refused. The current implementation is based exclusively on the current load average and the refuse load average option (option X, global variable RefuseLA): bool refuseconnections() { return (RefuseLA > 0 && CurrentLA >= RefuseLA); } A more clever implementation could look at more system resources. 6.3.6. Load Average Computation The routine getla returns the current load average (as a rounded integer). The distribution includes several possible implementations. If you are porting to a new environment you may need to add some new tweaks.[25] 6.4. Configuration in sendmail/daemon.c The file sendmail/daemon.c contains a number of routines that are dependent on the local networking environment. The version supplied assumes you have BSD style sockets. In previous releases, we recommended that you modify the routine maphostname if you wanted to gen- eralize $[ ... $] lookups. We now recommend that you create a new keyed map instead. 6.5. LDAP In this section we assume that sendmail has been compiled with support for LDAP. 6.5.1. LDAP Recursion LDAP Recursion allows you to add types to the search attributes on an LDAP map specification. The syntax is: ____________________ [25]If you do, please send updates to sendmail@Sendmail.ORG. 
SMM:08-160 Sendmail Installation and Operation Guide -v ATTRIBUTE[:TYPE[:OBJECTCLASS[|OBJECTCLASS|...]]] The new TYPEs are: NORMAL This attribute type specifies the attri- bute to add to the results string. This is the default. DN Any matches for this attribute are expected to have a value of a fully qual- ified distinguished name. sendmail will lookup that DN and apply the attributes requested to the returned DN record. FILTER Any matches for this attribute are expected to have a value of an LDAP search filter. sendmail will perform a lookup with the same parameters as the original search but replaces the search filter with the one specified here. URL Any matches for this attribute are expected to have a value of an LDAP URL. sendmail will perform a lookup of that URL and use the results from the attri- butes named in that URL. Note however that the search is done using the current LDAP connection, regardless of what is specified as the scheme, LDAP host, and LDAP port in the LDAP URL. Any untyped attributes are considered NORMAL attri- butes as described above. The optional OBJECTCLASS (| separated) list contains the objectClass values for which that attribute applies. If the list is given, the attri- bute named will only be used if the LDAP record being returned is a member of that object class. Note that if these new value attribute TYPEs are used in an AliasFile option setting, it will need to be double quoted to prevent sendmail from misparsing the colons. Note that LDAP recursion attributes which do not ultimately point to an LDAP record are not con- sidered an error. 6.5.1.1. 
Example

Since examples usually help clarify, here is an example which uses all four of the new types:

	O LDAPDefaultSpec=-h ldap.example.com -b dc=example,dc=com

	Kexample ldap -z, -k (&(objectClass=sendmailMTAAliasObject)(sendmailMTAKey=%0))
		-v sendmailMTAAliasValue,mail:NORMAL:inetOrgPerson,
		uniqueMember:DN:groupOfUniqueNames,
		sendmailMTAAliasSearch:FILTER:sendmailMTAAliasObject,
		sendmailMTAAliasURL:URL:sendmailMTAAliasObject

That definition specifies that:

+ Any value in a sendmailMTAAliasValue attribute will be added to the result string regardless of object class.

+ The mail attribute will be added to the result string if the LDAP record is a member of the inetOrgPerson object class.

+ The uniqueMember attribute is a recursive attribute, used only in groupOfUniqueNames records, and should contain an LDAP DN pointing to another LDAP record.  The desire here is to return the mail attribute from those DNs.

+ The sendmailMTAAliasSearch attribute and sendmailMTAAliasURL are both used only if referenced in a sendmailMTAAliasObject.  They are both recursive, the first for a new LDAP search string and the latter for an LDAP URL.

6.6.  STARTTLS

In this section we assume that sendmail has been compiled with support for STARTTLS.  To properly understand the use of STARTTLS in sendmail, it is necessary to understand at least some basics about X.509 certificates and public key cryptography.  This information can be found in books about SSL/TLS or on WWW sites.

6.6.1.  Certificates for STARTTLS

When acting as a server, sendmail requires X.509 certificates to support STARTTLS: one as certificate for the server (ServerCertFile and the corresponding key ServerKeyFile), at least one root CA (CACertFile), i.e., a certificate that is used to sign other certificates, and a path to a directory with other CAs (CACertPath).  The file specified via CACertFile can contain several certificates of CAs; the DNs of these certificates are sent to the client during the TLS handshake (as part of the CertificateRequest) as the list of acceptable CAs.
However, do not list too many root CAs in that file, otherwise the TLS handshake may fail; e.g.,

	error:14094417:SSL routines:SSL3_READ_BYTES:
	sslv3 alert illegal parameter:s3_pkt.c:964:SSL alert number 47

You should probably put only the CA cert into that file that signed your own cert(s), or at least only those you trust.  The CACertPath directory must contain the hashes of each CA certificate as filenames (or as links to them).  Symbolic links can be generated with the following two (Bourne) shell commands:

	C=FileName_of_CA_Certificate
	ln -s $C `openssl x509 -noout -hash < $C`.0

A better way to do this is to use the c_rehash command that is part of the OpenSSL distribution, because it handles subject hash collisions by incrementing the number in the suffix of the filename of the symbolic link, e.g., .0 to .1, and so on.

An X.509 certificate is also required for authentication in client mode (ClientCertFile and the corresponding private key ClientKeyFile); however, sendmail will always use STARTTLS when offered by a server.  The client and server certificates can be identical.  Certificates can be obtained from a certificate authority or created with the help of OpenSSL.  The required format for certificates and private keys is PEM.  To allow for automatic startup of sendmail, private keys (ServerKeyFile, ClientKeyFile) must be stored unencrypted.  The keys are only protected by the permissions of the file system.  Never make a private key available to a third party.

6.6.2.  PRNG for STARTTLS

STARTTLS requires a strong pseudo-random number generator (PRNG) to operate properly.  Depending on the TLS library in use, the PRNG may need to be seeded explicitly.  The "Entropy Gathering Daemon" can be used to provide useful random data; in this case, sendmail must be compiled with the flag EGD (see section 6.1).  Alternatively, a file with random data can be used to seed the PRNG; if that file has not been modified in the last 10 minutes before it is supposed to be used by sendmail, the content is considered obsolete.  One method for generating this file is:

	openssl rand -out /etc/mail/randfile -rand /path/to/file:...256

See the OpenSSL documentation for more information.
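Because the file's content goes stale after 10 minutes, it must be refreshed regularly if sendmail is to rely on it.  One possible approach (interval and path are illustrative only) is a cron entry:

	# illustrative crontab entry: rewrite the TLS random file every 5 minutes
	*/5 * * * *  openssl rand -out /etc/mail/randfile 256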
In this case, the PRNG for TLS is only seeded with other random data if the DontBlameSendmail option InsufficientEntropy is set.  This is most likely not sufficient for certain actions, e.g., generation of (temporary) keys.

Please see the OpenSSL documentation or other sources for further information about certificates, their creation and their usage, the importance of a good PRNG, and other aspects of TLS.

6.7.  Encoding of STARTTLS and AUTH related Macros

Macros that contain STARTTLS and AUTH related data which comes from outside sources, e.g., all macros containing information from certificates, are encoded to avoid problems with non-printable or special characters.  The latter are '\', '<', '>', '(', ')', '"', '+', and ' '.  All of these characters are replaced by their value in hexadecimal.  The macros which are subject to this encoding are {cert_subject}, {cert_issuer}, {cn_subject}, {cn_issuer}, as well as {auth_authen} and {auth_author}.

7.  ACKNOWLEDGEMENTS

I've worked on sendmail for many years, and many employers have been remarkably patient about letting me work on a large project that was not part of my official job.  The following people made notable contributions:

	Claus Assmann
	John Beck, Hewlett-Packard & Sun Microsystems
	Keith Bostic, CSRG, University of California, Berkeley
	Andrew Cheng, Sun Microsystems
	Michael J. Corrigan, University of California, San Diego
	Bryan Costales, International Computer Science Institute & InfoBeat
	Par (Pell) Emanuelsson
	Craig Everhart, Transarc Corporation
	Per Hedeland, Ericsson
	Tom Ivar Helbekkmo, Norwegian School of Economics
	Kari Hurtta, Finnish Meteorological Institute
	Allan E. Johannesen, WPI
	Jonathan Kamens, OpenVision Technologies, Inc.
	Takahiro Kanbe, Fuji Xerox Information Systems Co., Ltd.
	Brian Kantor, University of California, San Diego
	John Kennedy, Cal State University, Chico
	Murray S. Kucherawy, HookUp Communication Corp.
	Bruce Lilly, Sony U.S.
	Karl London
	Motonori Nakamura, Ritsumeikan University & Kyoto University
	John Gardiner Myers, Carnegie Mellon University
	Neil Rickert, Northern Illinois University
	Gregory Neil Shapiro, WPI
	Eric Schnoebelen, Convex Computer Corp.
	Eric Wassenaar, National Institute for Nuclear and High Energy Physics, Amsterdam
	Randall Winchester, University of Maryland
	Christophe Wolfhugel, Pasteur Institute & Herve Schauer Consultants (Paris)
	Exactis.com, Inc.

I apologize for anyone I have omitted, misspelled, misattributed, or otherwise missed.  At this point, I suspect that at least a hundred people have contributed code, and many more have contributed ideas, comments, and encouragement.  I've tried to list them in the RELEASE_NOTES in the distribution directory.  I appreciate their contribution as well.

APPENDIX A

COMMAND LINE FLAGS

Arguments must be presented with flags before addresses.  The flags are:

-Ax	Select an alternative .cf file which is either sendmail.cf for -Am or submit.cf for -Ac.  By default the .cf file is chosen based on the operation mode.  For -bm (default), -bs, and -t it is submit.cf if it exists, for all others it is sendmail.cf.

-bx	Set operation mode to x.  Operation modes are:

	m	Deliver mail (default)
	s	Speak SMTP on input side
	a-	``Arpanet'' mode (get envelope sender information from header)
	d	Run as a daemon in background
	D	Run as a daemon in foreground
	t	Run in test mode
	v	Just verify addresses, don't collect or deliver
	i	Initialize the alias database
	p	Print the mail queue
	P	Print overview over the mail queue (requires shared memory)
	h	Print the persistent host status database
	H	Purge expired entries from the persistent host status database

-Btype	Indicate body type.

-Cfile	Use a different configuration file.  Sendmail runs as the invoking user (rather than root) when this flag is specified.

-D logfile
	Send debugging output to the indicated logfile instead of stdout.
-dlevel Set debugging level. -f addr The envelope sender address is set to addr. This address may also be used in the From: header if that header is missing during initial submission. ____________________ -Deprecated. SMM:08-166 Sendmail Installation and Operation Guide Sendmail Installation and Operation Guide SMM:08-167 The envelope sender address is used as the reci- pient for delivery status notifications and may also appear in a Return-Path: header. -F name Sets the full name of this user to name. -G When accepting messages via the command line, indicate that they are for relay (gateway) submis- sion. sendmail may complain about syntactically invalid messages, e.g., unqualified host names, rather than fixing them when this flag is set. sendmail will not do any canonicalization in this mode. -h cnt Sets the "hop count" to cnt. This represents the number of times this message has been processed by sendmail (to the extent that it is supported by the underlying networks). Cnt is incremented dur- ing processing, and if it reaches MAXHOP (currently 25) sendmail throws away the message with an error. -L tag Sets the identifier used for syslog. Note that this identifier is set as early as possible. How- ever, sendmail may be used if problems arise before the command line arguments are processed. -n Don't do aliasing or forwarding. -N notifications Tag all addresses being sent as wanting the indi- cated notifications, which consists of the word "NEVER" or a comma-separated list of "SUCCESS", "FAILURE", and "DELAY" for successful delivery, failure, and a message that is stuck in a queue somewhere. The default is "FAILURE,DELAY". -r addr An obsolete form of -f. -oxvalue Set option x to the specified value. These options are described in Section 5.6. -Ooption=value Set option to the specified value (for long form option names). These options are described in Sec- tion 5.6. -Mxvalue Set macro x to the specified value. -pprotocol Set the sending protocol. 
Programs are encouraged to set this. The protocol field can be in the form protocol:host to set both the sending protocol and sending host. For example, "-pUUCP:uunet" sets the sending protocol to UUCP and the sending host to uunet. (Some existing programs use -oM to set the r and s macros; this is equivalent to using -p.)

-qtime  Try to process the queued up mail. If the time is given, sendmail will start one or more processes to run through the queue(s) at the specified time interval to deliver queued mail; otherwise, it only runs once. Each of these processes acts on a workgroup. These processes are also known as workgroup processes, or WGPs for short. Each workgroup is responsible for controlling the processing of one or more queues; workgroups help manage the use of system resources by sendmail. Each workgroup may have one or more children concurrently processing queues depending on the setting of MaxQueueChildren.

-qptime Similar to -q with a time argument, except that instead of periodically starting WGPs, sendmail starts persistent WGPs that alternate between processing queues and sleeping. The sleep time is specified by the time argument; it defaults to 1 second, except that a WGP always sleeps at least 5 seconds if its queues were empty in the previous run. Persistent processes are managed by a queue control process (QCP). The QCP is the parent process of the WGPs. Typically the QCP will be the sendmail daemon (when started with -bd or -bD) or a special process (named Queue control) (when started without -bd or -bD). If a persistent WGP ceases to be active for some reason, another WGP will be started by the QCP for the same workgroup in most cases. When a persistent WGP has core dumped, the debug flag no_persistent_restart is set, or the specific persistent WGP has been restarted too many times already, then the WGP will not be started again and a message will be logged to this effect.
To stop (SIGTERM) or restart (SIGHUP) persistent WGPs, the appropriate signal should be sent to the QCP. The QCP will propagate the signal to all of the WGPs and, if appropriate, restart the persistent WGPs.

-qGname Run the jobs in the queue group name once.

-q[!]Xstring
        Run the queue once, limiting the jobs to those matching Xstring. The key letter X can be I to limit based on queue identifier, R to limit based on recipient, S to limit based on sender, or Q to limit based on quarantine reason for quarantined jobs. A particular queued job is accepted if one of the corresponding attributes contains the indicated string. The optional ! character negates the condition tested. Multiple -qX flags are permitted, with items with the same key letter "or'ed" together, and items with different key letters "and'ed" together.

-Q[reason]
        Quarantine normal queue items with the given reason, or unquarantine quarantined queue items if no reason is given. This should only be used with some sort of item matching using -q[!]Xstring as described above.

-R ret  What information you want returned if the message bounces; ret can be "HDRS" for headers only or "FULL" for headers plus body. This is a request only; the other end is not required to honor the parameter. If "HDRS" is specified, local bounces also return only the headers.

-t      Read the header for "To:", "Cc:", and "Bcc:" lines, and send to everyone listed in those lists. The "Bcc:" line will be deleted before sending. Any addresses in the argument vector will be deleted from the send list.

-V envid
        The indicated envid is passed with the envelope of the message and returned if the message bounces.

-X logfile
        Log all traffic in and out of sendmail in the indicated logfile for debugging mailer problems. This produces a lot of data very quickly and should be used sparingly.

APPENDIX B

QUEUE FILE FORMATS

This appendix describes the format of the queue files.
These files live in a queue directory. The individual queue files for a message share a common id, which encodes when and by which process the file was created:

Y      Encoded year
M      Encoded month
D      Encoded day
h      Encoded hour
m      Encoded minute
s      Encoded second
NN     Encoded envelope number
ppppp  At least five decimal digits of the process ID

All files with the same id collectively define one message. Due to the use of memory-buffered files, some of these files may never appear on disk. The types are:

qf  The queue control file. This file contains the information necessary to process the job.

hf  The same as a queue control file, but for a quarantined queue job.

df  The data file. The message body (excluding the header) is kept in this file. Sometimes the df file is not stored in the same directory as the qf file; in this case, the qf file contains a `d' record which names the queue directory that contains the df file.

tf  A temporary file. This is an image of the qf file when it is being rebuilt. It should be renamed to a qf file very quickly.

xf  A transcript file, existing during the life of a session showing everything that happens during that session. Sometimes the xf file must be generated before a queue group has been selected; in this case, the xf file will be stored in a directory of the default queue group.

Qf  A ``lost'' queue control file. sendmail renames a qf file to Qf if there is a severe (configuration) problem that cannot be solved without human intervention. Search the logfile for the queue file id to figure out what happened. After you have resolved the problem, you can rename the Qf file to qf and send it again.

The queue control file is structured as a series of lines, each beginning with a code letter. The lines are as follows:

V   The version number of the queue file format, used to allow new sendmail binaries to read queue files created by older versions. Defaults to version zero. Must be the first line of the file if present.
For 8.12 the version number is 6.

A   The information given by the AUTH= parameter of the "MAIL FROM:" command, or $f@$j if sendmail has been called directly.

H   A header definition. There may be any number of these lines. The order is important: they represent the order in the final message. These use the same syntax as header definitions in the configuration file.

C   The controlling address. The syntax is "localuser:aliasname". Recipient addresses following this line will be flagged so that deliveries will be run as the localuser (a user name from the /etc/passwd file); aliasname is the name of the alias that expanded to this address (used for printing messages).

q   The quarantine reason for quarantined queue items.

Q   The ``original recipient'', specified by the ORCPT= field in an ESMTP transaction. Used exclusively for Delivery Status Notifications. It applies only to the following `R' line.

r   The ``final recipient'' used for Delivery Status Notifications. It applies only to the following `R' line.

R   A recipient address. This will normally be completely aliased, but is actually realiased when the job is processed. There will be one line for each recipient. Version 1 qf files also include a leading colon-terminated list of flags, which can be `S' to return a message on successful final delivery, `F' to return a message on failure, `D' to return a message if the message is delayed, `B' to indicate that the body should be returned, `N' to suppress returning the body, and `P' to declare this as a ``primary'' (command line or SMTP-session) address.

S   The sender address. There may only be one of these lines.

T   The job creation time. This is used to compute when to time out the job.

P   The current message priority. This is used to order the queue. Higher numbers mean lower priorities. The priority changes as the message sits in the queue.
The initial priority depends on the message class and the size of the message.

M   A message. This line is printed by the mailq command, and is generally used to store status information. It can contain any text.

F   Flag bits, represented as one letter per flag. Defined flag bits are r, indicating that this is a response message, and w, indicating that a warning message has been sent announcing that the mail has been delayed. Other flag bits are: 8: the body contains 8bit data, b: a Bcc: header should be removed, d: the mail has RET parameters (see RFC 1894), n: the body of the message should not be returned in case of an error, s: the envelope has been split.

N   The total number of delivery attempts.

K   The time (as seconds since January 1, 1970) of the last delivery attempt.

d   If the df file is in a different directory than the qf file, then a `d' record is present, specifying the directory in which the df file resides.

I   The i-number of the data file; this can be used to recover your mail queue after a disastrous disk crash.

$   A macro definition. The values of certain macros are passed through to the queue run phase.

B   The body type. The remainder of the line is a text string defining the body type. If this field is missing, the body type is assumed to be "undefined" and no special processing is attempted. Legal values are "7BIT" and "8BITMIME".

Z   The original envelope id (from the ESMTP transaction). For Delivery Status Notifications only.
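The code-letter format above lends itself to simple tooling. As an illustrative sketch (not part of sendmail; the field names and the subset of record types handled here are my own), a qf control file can be parsed like this:

```python
# Illustrative only: parse the code-letter lines of a sendmail qf queue
# control file into a dictionary. Handles just a few of the record types
# described above; field names are invented for this sketch.
def parse_qf(text):
    record = {"headers": [], "recipients": [], "macros": {}}
    for line in text.splitlines():
        if not line:
            continue
        code, rest = line[0], line[1:]
        if code == "V":
            record["version"] = int(rest)
        elif code == "S":
            record["sender"] = rest
        elif code == "R":
            # Version 1+ files carry a colon-terminated flag list before the address.
            flags, sep, addr = rest.partition(":")
            if sep:
                record["recipients"].append({"flags": flags, "address": addr})
            else:
                record["recipients"].append({"flags": "", "address": rest})
        elif code == "H":
            record["headers"].append(rest)
        elif code == "T":
            record["created"] = int(rest)
        elif code == "$":
            name, _, value = rest.partition(" ")
            record["macros"][name] = value
    return record

sample = "V4\nT711358135\nSeric\nRPFD:eric@mammoth.Berkeley.EDU\n"
rec = parse_qf(sample)
```

Running this over a file like the example in the next section yields the version, the creation time, the sender, and each recipient together with its flag list.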
As an example, the following is a queue file sent to "eric@mammoth.Berkeley.EDU" and "bostic@okeeffe.CS.Berkeley.EDU"[1]:

V4
T711358135
K904446490
N0
P2100941
$_eric@localhost
${daemon_flags}
Seric
Ceric:100:1000:sendmail@vangogh.CS.Berkeley.EDU
RPFD:eric@mammoth.Berkeley.EDU
RPFD:bostic@okeeffe.CS.Berkeley.EDU
H?P?Return-path: <^g>
H??Received: by vangogh.CS.Berkeley.EDU (5.108/2.7) id AAA06703; Fri, 17 Jul 1992 00:28:55 -0700
H??Received: from mail.CS.Berkeley.EDU by vangogh.CS.Berkeley.EDU (5.108/2.7) id AAA06698; Fri, 17 Jul 1992 00:28:54 -0700
H??Received: from [128.32.31.21] by mail.CS.Berkeley.EDU (5.96/2.5) id AA22777; Fri, 17 Jul 1992 03:29:14 -0400
H??Received: by foo.bar.baz.de (5.57/Ultrix3.0-C) id AA22757; Fri, 17 Jul 1992 09:31:25 GMT
H?F?From: eric@foo.bar.baz.de (Eric Allman)
H?x?Full-name: Eric Allman
H??Message-id: <9207170931.AA22757@foo.bar.baz.de>
H??To: sendmail@vangogh.CS.Berkeley.EDU
H??Subject: this is an example message

This shows the person who sent the message, the submission time (in seconds since January 1, 1970), the message priority, the message class, the recipients, and the headers for the message.

[1] This example is contrived and probably inaccurate for your environment. Glance over it to get an idea; nothing can replace looking at what your own system generates.

APPENDIX C

SUMMARY OF SUPPORT FILES

This is a summary of the support files that sendmail creates or generates. Many of these can be changed by editing the sendmail.cf file; check there to find the actual pathnames.

/usr/sbin/sendmail
        The binary of sendmail.

/usr/bin/newaliases
        A link to /usr/sbin/sendmail; causes the alias database to be rebuilt. Running this program is completely equivalent to giving sendmail the -bi flag.

/usr/bin/mailq
        Prints a listing of the mail queue. This program is equivalent to using the -bp flag to sendmail.

/etc/mail/sendmail.cf
        The configuration file, in textual form.

/etc/mail/helpfile
        The SMTP help file.
/etc/mail/statistics
        A statistics file; need not be present.

/etc/mail/sendmail.pid
        Created in daemon mode; it contains the process id of the current SMTP daemon. If you use this in scripts, use ``head -1'' to get just the first line; the second line contains the command line used to invoke the daemon, and later versions of sendmail may add more information to subsequent lines.

/etc/mail/aliases
        The textual version of the alias file.

/etc/mail/aliases.db
        The alias file in hash(3) format.

/etc/mail/aliases.{pag,dir}
        The alias file in ndbm(3) format.

/var/spool/mqueue
        The directory in which the mail queue(s) and temporary files reside.

/var/spool/mqueue/qf*
        Control (queue) files for messages.

/var/spool/mqueue/df*
        Data files.

/var/spool/mqueue/tf*
        Temporary versions of the qf files, used during queue file rebuild.

/var/spool/mqueue/xf*
        A transcript of the current session.

Sendmail Installation and Operation Guide

TABLE OF CONTENTS

1. BASIC INSTALLATION ................................ 7
1.1. Compiling Sendmail ........................... 7
1.1.1. Tweaking the Build Invocation ........... 7
1.1.2. Creating a Site Configuration File ...... 8
1.1.3. Tweaking the Makefile ................... 8
1.1.4. Compilation and installation ............ 9
1.2. Configuration Files .......................... 10
1.3. Details of Installation Files ................ 12
1.3.1. /usr/sbin/sendmail ...................... 12
1.3.2. /etc/mail/sendmail.cf ................... 12
1.3.3. /etc/mail/submit.cf ..................... 13
1.3.4. /usr/bin/newaliases ..................... 13
1.3.5. /usr/bin/hoststat ....................... 13
1.3.6. /usr/bin/purgestat ...................... 13
1.3.7. /var/spool/mqueue ....................... 13
1.3.8.
/var/spool/clientmqueue ................. 14
1.3.9. /var/spool/mqueue/.hoststat ............. 14
1.3.10. /etc/mail/aliases* ..................... 15
1.3.11. /etc/rc or /etc/init.d/sendmail ........ 15
1.3.12. /etc/mail/helpfile ..................... 16
1.3.13. /etc/mail/statistics ................... 16
1.3.14. /usr/bin/mailq ......................... 16
1.3.15. sendmail.pid ........................... 18
1.3.16. Map Files .............................. 18
2. NORMAL OPERATIONS ................................. 19
2.1. The System Log ............................... 19
2.1.1. Format .................................. 19
2.1.2. Levels .................................. 20
2.2. Dumping State ................................ 21
2.3. The Mail Queues .............................. 21
2.3.1. Queue Groups and Queue Directories ...... 21
2.3.2. Queue Runs .............................. 22
2.3.3. Manual Intervention ..................... 23
2.3.4. Printing the queue ...................... 23
2.3.5. Forcing the queue ....................... 24
2.3.6. Quarantined Queue Items ................. 25
2.4. Disk Based Connection Information ............ 26
2.5. The Service Switch ........................... 27
2.6. The Alias Database ........................... 28
2.6.1. Rebuilding the alias database ........... 30
2.6.2. Potential problems ...................... 30
2.6.3. List owners ............................. 31
2.7. User Information Database .................... 31
2.8. Per-User Forwarding (.forward Files) ......... 31
2.9. Special Header Lines ......................... 32
2.9.1. Errors-To: .............................. 32
2.9.2. Apparently-To: .......................... 32
2.9.3. Precedence .............................. 33
2.10. IDENT Protocol Support ...................... 33
3. ARGUMENTS ......................................... 34
3.1. Queue Interval ............................... 34
3.2.
Daemon Mode .................................. 35
3.3. Forcing the Queue ............................ 35
3.4. Debugging .................................... 36
3.5. Changing the Values of Options ............... 37
3.6. Trying a Different Configuration File ........ 37
3.7. Logging Traffic .............................. 37
3.8. Testing Configuration Files .................. 38
3.9. Persistent Host Status Information ........... 40
4. TUNING ............................................ 40
4.1. Timeouts ..................................... 41
4.1.1. Queue interval .......................... 41
4.1.2. Read timeouts ........................... 41
4.1.3. Message timeouts ........................ 44
4.2. Forking During Queue Runs .................... 45
4.3. Queue Priorities ............................. 46
4.4. Load Limiting ................................ 46
4.5. Resource Limits .............................. 47
4.6. Measures against Denial of Service Attacks ... 47
4.7. Delivery Mode ................................ 48
4.8. Log Level .................................... 49
4.9. File Modes ................................... 50
4.9.1. To suid or not to suid? ................. 50
4.9.2. Turning off security checks ............. 51
4.10. Connection Caching .......................... 54
4.11. Name Server Access .......................... 55
4.12. Moving the Per-User Forward Files ........... 57
4.13. Free Space .................................. 57
4.14. Maximum Message Size ........................ 58
4.15. Privacy Flags ............................... 58
4.16. Send to Me Too .............................. 58
5. THE WHOLE SCOOP ON THE CONFIGURATION FILE ......... 58
5.1. R and S -- Rewriting Rules ................... 59
5.1.1. The left hand side ...................... 60
5.1.2. The right hand side ..................... 60
5.1.3. Semantics of rewriting rule sets ........ 62
5.1.4. Ruleset hooks ...........................
64
5.1.4.1. check_relay ........................ 64
5.1.4.2. check_mail ......................... 65
5.1.4.3. check_rcpt ......................... 65
5.1.4.4. check_data ......................... 65
5.1.4.5. check_compat ....................... 65
5.1.4.6. check_eoh .......................... 65
5.1.4.7. check_eom .......................... 66
5.1.4.8. check_etrn ......................... 66
5.1.4.9. check_expn ......................... 66
5.1.4.10. check_vrfy ........................ 66
5.1.4.11. trust_auth ........................ 67
5.1.4.12. tls_client ........................ 67
5.1.4.13. tls_server ........................ 67
5.1.4.14. tls_rcpt .......................... 67
5.1.4.15. srv_features ...................... 68
5.1.4.16. try_tls ........................... 69
5.1.4.17. authinfo .......................... 69
5.1.4.18. queuegroup ........................ 69
5.1.4.19. greet_pause ....................... 70
5.1.5. IPC mailers ............................. 70
5.2. D -- Define Macro ............................ 71
5.3. C and F -- Define Classes .................... 82
5.4. M -- Define Mailer ........................... 84
5.5. H -- Define Header ........................... 93
5.6. O -- Set Option .............................. 94
5.7. P -- Precedence Definitions .................. 126
5.8. V -- Configuration Version Level ............. 127
5.9. K -- Key File Declaration .................... 129
5.10. Q -- Queue Group Declaration ................ 141
5.11. X -- Mail Filter (Milter) Definitions ....... 143
5.12. The User Database ........................... 144
5.12.1. Structure of the user database ......... 145
5.12.2. User database semantics ................ 146
5.12.3. Creating the database[23] .............. 147
6. OTHER CONFIGURATION ............................... 147
6.1. Parameters in devtools/OS/$oscf .............. 147
6.2. Parameters in sendmail/conf.h ................ 149
6.3.
Configuration in sendmail/conf.c ............. 153
6.3.1. Built-in Header Semantics ............... 154
6.3.2. Restricting Use of Email ................ 156
6.3.3. New Database Map Classes ................ 157
6.3.4. Queueing Function ....................... 158
6.3.5. Refusing Incoming SMTP Connections ...... 159
6.3.6. Load Average Computation ................ 159
6.4. Configuration in sendmail/daemon.c ........... 159
6.5. LDAP ......................................... 159
6.5.1. LDAP Recursion .......................... 159
6.5.1.1. Example ............................ 160
6.6. STARTTLS ..................................... 161
6.6.1. Certificates for STARTTLS ............... 161
6.6.2. PRNG for STARTTLS ....................... 162
6.7. Encoding of STARTTLS and AUTH related Macros . 163
7. ACKNOWLEDGEMENTS .................................. 163
Appendix A. COMMAND LINE FLAGS ....................... 166
Appendix B. QUEUE FILE FORMATS ....................... 170
Appendix C. SUMMARY OF SUPPORT FILES ................. 174
CHANGES file for XML::Filter::Dispatcher 0.46 Fri Jan 10 09:13:37 EST 2003 - Updated SYNOPSIS, it was badly out of date and tripped up t@tomacorp.com, as seen on perlmonks (thanks, Matt). - Get '@node:*' => [ 'string()' => sub {} ] working, along with other related expressions. 0.45 Fri Jan 3 14:56:23 EST 2003 - Replace xset_fallthrough() with the much more flexible xrun_next_action(). The latter is a more informative name, too. - xset() now croaks when overwriting a defined value. - Added xoverwrite() to allow defined values to be overwritten. - Empty Rules lists now work (ie do nothing). - Unbuggered postponements a bit. See t/postponements.t for a couple of known failures (commented out). 0.44 Tue Dec 31 10:40:20 EST 2002 - Added bin/xfd_dump - xvalue now defaults to $_[1] (the sax data structure) if the rule was a matching expression. - added xvaluetype() (NEEDS TESTS!) - Handle default namespace more gracefully. Added t/namespaces.t - implemented xset_fallthrough(). 0.43 Tue Dec 31 00:48:16 EST 2002 - Allow '@*' => [ 'string()' => \&foo ] rules to work. - Allow 'end::foo' rules to work - Add tracing support to xstack directives. - The xstack is now unwound after every non-start_ event. - start_element and end_element no longer accidently hide events from the xstack maintenance code. 0.42 - Add XML::SAX::EventMethodMaker to PREREQ_PM - Added xadd and xset. 0.41 Fri Dec 20 09:54:02 EST 2002 - Fix attribute ordering sensitivity on perl's hash algorithm. This gets the test suite to pass and might help somebody somewhere's production code to operate in a predictable fashion across perl versions. - string( * ) now compiles (and works :) - get xstack synced with the order events. - add t/builder.t 0.4 Thu Dec 12 06:32:24 EST 2002 - Major rewrite. Now supports most of XPath plus EventPath goodies.
In this installment we'll train a tank commander. I'm sharing a bunch of the code for this project here. Previously, I reviewed core concepts in Reinforcement Learning (RL) and introduced important parts of the OpenAI Gym API. You can review that introduction here. Installing some of the OpenAI Gym environments is somewhat painful: many online notebook services like Colab and Kaggle don't allow you to install them, so I'm going to stick to Atari for now. If you're interested in trying to set up OpenAI Gym with more flexibility, you might start with this interesting write-up.

In order to write agents that actually take the game screen into account when making decisions, we'll need to update our run_job utility from last time:

action = model.decision_function(obs=observation, env=env)

And our RandomAgent will need to be modified accordingly:

class RandomAgentContainer:
    """A model that takes random actions and doesn't learn"""

    def __init__(self):
        pass

    def decision_function(self, obs=None, env=None):
        if env:
            return env.action_space.sample()
        else:
            return 0

    def train_on_history(self, history):
        pass

model = RandomAgentContainer()

Now we can use our RandomAgent to explore all the information that our job creates.

result = run_job("Robotank-v0", model, 10000, episodes_per_train=0);
result.keys()

Output:

dict_keys(['history', 'env', 'parameters'])

Our job produced video of the game being played, as well as a history of images, actions, predictions, and rewards. It also saved the environment object from OpenAI Gym. Let's begin by trying to understand our observations in a video:

render_video(0, result['env']);

If, like me, you have never played Robot Tank on an Atari... you can read the manual! You can learn about the heads-up display and more. So anyway, it looks like the image has a bunch of noise in it. Let's see if we can extract any of that...
import matplotlib
from matplotlib.image import imread
from matplotlib.pyplot import imshow

observation_sample = [im['observation'] for im in result.get('history')]
imshow(observation_sample[0])

So, using the ticks on the axes and with a little trial and error, we can find the bounding boxes for the radar panel and the periscope.

imshow(observation_sample[0][139:172, 60:100, :])

Output:

<matplotlib.image.AxesImage at 0x7f48ad329b38>

So, we can certainly crop this image and worry less about the noise...

radar_bounding_box = ((139, 172), (60, 100), (None))

From reading the manual we know that one of the four indicators bracketing this radar display is "R" for "radar." In other words, we can't rely on radar as the only input, because all of those indicators represent subsystems that can be disabled. Let's also take a bounding box for the periscope:

imshow(observation_sample[0][80:124, 12:160, :])

Output:

<matplotlib.image.AxesImage at 0x7f48ad693b38>

peri_bounding_box = ((80, 124), (12, 160), (None))

Let's also check the info field because it sometimes has observation data.

result.get('history')[0].get('info')

Output:

{'ale.lives': 4}

I'm going to intentionally ignore the V, C, R, T boxes, and we can always reintroduce them later if we think a performance gain is in the offing. You saw in the manual how they work, so I don't think it's a cause for concern... Let's also try to understand the action space.

env = result['env']
print(env.action_space)

Output:

Discrete(18)

Hmm... not helpful at all. But that's what you'd naturally think to do... It turns out that extracting action meanings has its own namespace in the Gym API.

env.unwrapped.get_action_meanings()

Output:

['NOOP', 'FIRE', 'UP', 'RIGHT', 'LEFT', 'DOWN', 'UPRIGHT', 'UPLEFT', 'DOWNRIGHT', 'DOWNLEFT', 'UPFIRE', 'RIGHTFIRE', 'LEFTFIRE', 'DOWNFIRE', 'UPRIGHTFIRE', 'UPLEFTFIRE', 'DOWNRIGHTFIRE', 'DOWNLEFTFIRE']

So "NOOP" presumably means "no-op", i.e. "do nothing."
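Since the meanings come back as a plain list, translating a Discrete(18) action index into a readable name is just list indexing. A tiny sketch (the helper name is my own; the meanings list is copied from the output above):

```python
# The 18 action meanings reported by env.unwrapped.get_action_meanings()
# for Robotank-v0, copied from the output above.
ACTION_MEANINGS = [
    'NOOP', 'FIRE', 'UP', 'RIGHT', 'LEFT', 'DOWN',
    'UPRIGHT', 'UPLEFT', 'DOWNRIGHT', 'DOWNLEFT',
    'UPFIRE', 'RIGHTFIRE', 'LEFTFIRE', 'DOWNFIRE',
    'UPRIGHTFIRE', 'UPLEFTFIRE', 'DOWNRIGHTFIRE', 'DOWNLEFTFIRE',
]

def describe(action_index):
    """Translate a Discrete(18) action index into its human-readable meaning."""
    return ACTION_MEANINGS[action_index]
```

This is handy when logging what an agent actually did during an episode.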
The rest apparently constitute all the permutations of actions available to the client. This is what we would expect the action space to be. These are also discrete actions, so we can code our model to take exactly one action per step. Finally, let's visualize the rewards.

set([r['reward'] for r in result['history']])  # Unique rewards across history

Output:

{0.0, 1.0}

import matplotlib.pyplot as plt
plt.plot([r['reward'] for r in result['history']])

Looks like the reward function is simply "score a hit = 1, else 0". We can confirm by visualizing the observations at reward time.

# Observations when reward was given
reward_incidents = list(filter(lambda i: i['reward'], result['history']))

i = 0
imshow(reward_incidents[i]['observation'])

i = 1
imshow(reward_incidents[i]['observation'])

i = 2
imshow(reward_incidents[i]['observation'])

They're all images that seem to be captured right after the tank scores a hit.

The TankCommander agent needs to learn how to decide which action to take. So, we first need to give it a mechanism for learning. In this case, we're going to use a special kind of graph. In this graph there are three kinds of nodes. The nodes are organized into "layers" that can share many connections and have a function in common for how they decide to aggregate and send signals. Now, I've simplified a lot, but the graph that we're talking about, if properly organized, is a deep learning neural net. To create one we can use tensorflow like so:

import tensorflow as tf
from tensorflow.keras import datasets, layers, models, preprocessing, callbacks
import numpy as np
import random

inputs = layers.Input(shape=(44, 148, 3))
model.compile(optimizer='sgd', loss='mae', metrics=['accuracy'])

Let's break it down:

inputs = layers.Input(shape=(44, 148, 3))

You'll recall that these are the dimensions for one of our periscope images. These map to our input nodes. This establishes layers of regular nodes in our graph and their interactions. I'm oversimplifying.
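To make the "aggregate and send signals" idea concrete, here is a tiny NumPy sketch of what one dense layer of such a graph computes. This is illustrative only (it is not the post's actual model, and the function names are my own):

```python
import numpy as np

# Illustrative only: one dense "layer" aggregates weighted input signals
# and applies an activation function before passing them on.
def dense_layer(x, weights, bias):
    signal = x @ weights + bias      # weighted sum of incoming connections
    return np.maximum(signal, 0.0)   # ReLU-style activation

rng = np.random.default_rng(0)
x = rng.random(6)                    # signals from 6 input nodes
w = rng.random((6, 3))               # connection strengths to 3 output nodes
out = dense_layer(x, w, np.zeros(3))
```

Each of the 3 output values is a weighted combination of all 6 inputs; stacking such layers is what makes the graph "deep."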
model.compile(optimizer='sgd', loss='mae', metrics=['accuracy'])

Finally, we compile the model, giving it some special parameters. To teach our graph to drive a tank, we need to call the fit method, e.g. model.fit(data, correct_prediction). Each time this happens, our model goes back and updates the strength (aka the 'weight') of the regular connections. The fancy name for this is "backpropagation." Anyway, the parameters of model.compile help determine how backpropagation is executed.

Now that we have a brain, the brain needs experiences to train on. As we know from our manual and our little exploration of the data, each episode maps to one full game of Robot Tank. However, in each game the player gets several tanks, and if a tank is destroyed, the player simply "respawns" in a new tank. So the episode is actually not the smallest unit of experience; rather, it's the in-game "life" of a single tank. We can extract this from:

episodes = [
    list(filter(lambda h: h["episode"] == e, history))
    for e in range(n_episodes)
]

game_lives = [
    list(filter(lambda h: h.get('info').get('ale.lives') == l, episode))
    for l in range(5)
]

For each of these lives we can get a cumulative reward (how many hits were scored before the life ends):

rewards = [obs.get('reward') for obs in game_life]
cum_rewards = sum(rewards)

And using this number we can determine how strongly we want our brain to react to this experience:

# Positive experience
if cum_rewards:
    nudge = cum_rewards * 0.03
# Negative experience
else:
    nudge = 0 - 0.03

Now, for a given step, we can:

- take the action, prediction, and periscope image as data
- nudge our prediction only at the action index, since we can only learn from the actions we have taken
- call model.fit(image, prediction_with_nudge)

To visualize the problem with this, imagine you are tasked with riding a bike down a mountain blindfolded. As you miraculously ride down the mountain without killing yourself, you may reach a point where you seem to have reached the bottom.
In any direction you try to go, you must pedal uphill. You take your blindfold off only to realize that you've barely gone a mile, and that you still have far to go before you reach the base of the mountain. The fancy term for this is a "local minimum." To address this we can just force the commander to randomly take actions sometimes:

for obs in game_life:
    action, prediction = obs.get('action'), obs.get('prediction')
    if self.epsilon and (random.uniform(0, 1.0) < self.epsilon):
        action = random.randrange(18)
    # Copy
    update = list(prediction)
    # Update only the target action
    update[0][action] = update[0][action] + displacement

With only 120k steps, our tank already seemed to have settled on a strategy, as shown in this long, long gif. As you can see, turning left is powerful in Robotank -- a whole squadron killed! ...but then it dies. I think this is pretty strong for a stumpy model trained only on the periscope viewport. Time permitting, I may continue to tinker with this one -- increasing the epsilon value, tinkering with the graph parameters, and adding views all could help nudge the tank commander towards a more nuanced strategy.

Anyway, you've learned a bit about implementing DL with RL in Python. Thanks for reading!
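As a parting sketch, the epsilon trick and the reward nudge described above can be combined into one standalone target-building function. This is my own version, not the post's exact code; it assumes `prediction` is a 1x18 nested list, like what model.predict returns for a single image:

```python
import random

# Sketch only (assumed shapes): build the training target for one step.
# `prediction` is a 1x18 nested list of action scores; `action` is the
# index of the action that was actually taken during play.
def nudged_target(prediction, action, cum_rewards, epsilon=0.1, rng=random):
    # Occasionally explore a random action instead of the model's choice,
    # to help escape local minima.
    if epsilon and rng.uniform(0, 1.0) < epsilon:
        action = rng.randrange(18)
    # Reinforce in proportion to reward; discourage lives that scored nothing.
    nudge = cum_rewards * 0.03 if cum_rewards else -0.03
    update = [list(prediction[0])]   # copy so the original prediction survives
    update[0][action] += nudge       # only the taken action is adjusted
    return action, update

pred = [[0.0] * 18]
action, target = nudged_target(pred, action=5, cum_rewards=2, epsilon=0)
```

The returned target is what would be fed to model.fit alongside the corresponding periscope image.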
Scala Quick Guide Scala - Overview Scala, short for Scalable Language, is a hybrid functional programming language. It was created by Martin Odersky and it was first released in 2003. Scala smoothly integrates features of object-oriented and functional languages and Scala is compiled to run on the Java Virtual Machine. Many existing companies, which depend on Java for business critical applications, are turning to Scala to boost their development productivity, application scalability and overall reliability. Among the features which make Scala a first choice of application developers, Scala can execute Java code: it lets you use all the classes of the Java SDK in Scala, and also your own custom Java classes, or your favourite Java open source projects. Scala vs Java: Scala has a set of features, which differ from Java. Some of these are: All types are objects. Type inference. Nested Functions. Functions are objects. Domain specific language (DSL) support. Traits. Closures. Concurrency support inspired by Erlang. Scala Web Frameworks: Scala is being used everywhere and importantly in enterprise web applications. You can check a few of the most popular Scala web frameworks: Scala Environment Setup The Scala language can be installed on any UNIX-like or Windows system. Before you start installing Scala on your machine, you must have Java 1.5 or greater installed on your computer. Installing Scala on Windows: Step (1): JAVA Setup: First, you must set the JAVA_HOME environment variable and add the JDK's bin directory to your PATH variable. To verify if everything is fine, at the command prompt, type java -version and press Enter. You should see something like the following: C:\>java -version java version "1.6.0_15" Java(TM) SE Runtime Environment (build 1.6.0_15-b03) Java HotSpot(TM) 64-Bit Server VM (build 14.1-b02, mixed mode) C:\> Next, test to see that the Java compiler is installed. Type javac -version.
You should see something like the following: C:\>javac -version javac 1.6.0_15 C:\> Step (2): Scala Setup: Next, you can download Scala from. At the time of writing this tutorial I downloaded scala-2.9.0.1-installer.jar and put it in C:/> directory. Make sure you have admin privilege to proceed. Now execute the following command at command prompt: C:\>java -jar scala-2.9.0.1-installer.jar C:\> Above command will display an installation wizard, which will guide you to install scala on your windows machine. During installation, it will ask for license agreement, simply accept it and further it will ask a path where scala will be installed. I selected default given path C:\Program Files\scala, you can select a suitable path as per your convenience. Finally, open a new command prompt and type scala -version and press Enter. You should see the following: C:\>scala -version Scala code runner version 2.9.0.1 -- Copyright 2002-2011, LAMP/EPFL C:\> Congratulations, you have installed Scala on your Windows machine. Next section will teach you how to install scala on your Mac OS X and Unix/Linux machines. Installing Scala on Mac OS X and Linux Step (1): JAVA Setup: Make sure you have got the Java JDK 1.5 or greater installed on your computer and set JAVA_HOME environment variable and add the JDK's bin directory to your PATH variable. To verify if everything is fine, at command prompt, type java -version and press Enter. You should see something like the following: $java -version java version "1.5.0_22" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_22-b03) Java HotSpot(TM) Server VM (build 1.5.0_22-b03, mixed mode) $ Next, test to see that the Java compiler is installed. Type javac -version. You should see something like the following: $javac -version javac 1.5.0_22 javac: no source files Usage: javac <options> <source files> ................................................ $ Step (2): Scala Setup: Next, you can download Scala from. 
At the time of writing this tutorial I downloaded scala-2.9.0.1-installer.jar and put it in /tmp directory. Make sure you have admin privilege to proceed. Now, execute the following command at command prompt: $java -jar scala-2.9.0.1-installer.jar Welcome to the installation of scala 2.9.0.1! The homepage is at: press 1 to continue, 2 to quit, 3 to redisplay 1 ................................................ [ Starting to unpack ] [ Processing package: Software Package Installation (1/1) ] [ Unpacking finished ] [ Console installation done ] $ During installation, it will ask for license agreement, to accept it type 1 and it will ask a path where scala will be installed. I entered /usr/local/share, you can select a suitable path as per your convenience. Finally, open a new command prompt and type scala -version and press Enter. You should see the following: $scala -version Scala code runner version 2.9.0.1 -- Copyright 2002-2011, LAMP/EPFL $ Congratulations, you have installed Scala on your UNIX/Linux machine. Scala Basic Syntax If you have good understanding on Java, then it will be very easy for you to learn Scala. The biggest syntactic difference between Scala and Java is that the ; line end character is optional. When we consider a Scala.]) - Scala program processing starts from the main() method which is a mandatory part of every Scala Program. Scala Identifiers: All Scala components require names. Names used for objects, classes, variables and methods are called identifiers. A keyword cannot be used as an identifier and identifiers are case-sensitive.: + ++ ::: <?> :> The Scala compiler will internally "mangle" operator identifiers to turn them into legal Java identifiers with embedded $ characters. For instance, the identifier :-> would be represented internally as $colon$minus$greater. Mixed identifiers A mixed identifier consists of an alphanumeric identifier, which is followed by an underscore and an operator identifier. 
Following are legal mixed identifiers: unary_+, myvar_= Here, unary_+ used as a method name defines a unary + operator and myvar_= used as a method name defines an assignment operator. Literal identifiers A literal identifier is an arbitrary string enclosed in back ticks (` . . . `). Following are legal literal identifiers: `x` `<clinit>` `yield` Scala Keywords: Scala has a set of reserved words (such as abstract, case, class, def, object, trait, val, var, yield) that cannot be used as names for constants, variables, methods or classes. A minimal Scala program looks like this: object HelloWorld { def main(args: Array[String]) { println("Hello, world!") } } Blank Lines and Whitespace: A line containing only whitespace, possibly with a comment, is known as a blank line, and Scala totally ignores it. Tokens may be separated by whitespace characters and/or comments. Newline Characters: Scala is a line-oriented language where statements may be terminated by semicolons (;) or newlines. A semicolon is usually optional, but is required when multiple statements appear on a single line: val s = "hello"; println(s) Scala Data Types Scala has all the same data types as Java, with the same memory footprint and precision. Following is the table giving details about all the data types available in Scala: Byte, Short, Int, Long, Float, Double, Char, String, Boolean, Unit, Null, Nothing, Any and AnyRef. Integer Literals Integer literals are usually of type Int, or of type Long when followed by a L or l suffix. Here are some integer literals: 0 035 21 0xFFFFFFFF 0777L Floating Point Literals Floating point literals are of type Float when followed by a floating point type suffix F or f, and are of type Double otherwise. Here are some floating point literals: 0.0 1e30f 3.14159f 1.0e100 .1 Character Literals A character literal is a single character enclosed in single quotes. Here are some character literals: 'a' '\u0041' '\n' '\t' String Literals A string literal is a sequence of characters in double quotes. The characters are either printable unicode characters or are described by escape sequences. Here are some string literals: "Hello,\nWorld!" "This string contains a \" character." Multi-Line Strings A multi-line string literal is a sequence of characters enclosed in triple quotes """ ... """. The sequence of characters is arbitrary, except that it may contain three or more consecutive quote characters only at the very end. Characters must not necessarily be printable; newlines or other control characters are also permitted.
Here is a multi-line string literal: """the present string spans three lines.""" The Null Value: The null value is of type scala.Null and is thus compatible with every reference type; it denotes a reference that refers to the special "null" object. Escape Sequences: Escape sequences such as \n (newline) and \t (tab) are recognized in character and string literals, for example: object Test { def main(args: Array[String]) { println("Hello\tWorld\n\n" ); } } When the above code is compiled and executed, it produces the following result: Hello World Scala Variables Variables are nothing but reserved memory locations to store values. Variable Declaration Scala allows a variable to be declared as either a value (a constant, using val) or a variable (using var), for example: var myVar :Int; val myVal :String; Variable Type Inference: When you assign an initial value to a variable, the Scala compiler can figure out the type of the variable based on the value assigned to it. This is called variable type inference. Therefore, you could write these variable declarations like this: var myVar = 10; val myVal = "Hello, Scala!"; Variables in Scala can have three different scopes depending on the place where they are being used. They can exist as fields, as method parameters and as local variables. Below are the details about each type of scope: Fields: Fields are variables that belong to an object. The fields are accessible from inside every method in the object. Fields can also be accessible outside the object depending on what access modifiers the field is declared with. Object fields can be both mutable or immutable types and can be defined using either var or val. Method Parameters: Method parameters are variables used to pass values into a method when it is called; they are always immutable and accessible only inside the method. Local Variables: Local variables are variables declared inside a method; they can be either var or val and are accessible only from inside the method. Scala Operators An operator is a symbol that tells the compiler to perform specific mathematical or logical manipulations. Scala is rich in built-in operators and provides the following types of operators: Arithmetic Operators Relational Operators Logical Operators Bitwise Operators Assignment Operators This chapter will examine the arithmetic, relational, logical, bitwise, assignment and other operators one by one.
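The operator tables for this chapter were trimmed in this copy, so here is a minimal sketch showing one operator from each family in action (the object name OperatorsDemo is just illustrative):

```scala
object OperatorsDemo {
  def main(args: Array[String]): Unit = {
    val a = 10
    val b = 20
    println(a + b)            // arithmetic: prints 30
    println(a < b)            // relational: prints true
    println(a < b && b != 0)  // logical: prints true
    println(a & b)            // bitwise AND of 01010 and 10100: prints 0
    var c = a
    c += 5                    // compound assignment
    println(c)                // prints 15
  }
}
```

Running it with scala OperatorsDemo prints 30, true, true, 0 and 15 in order.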
Arithmetic Operators: There are following arithmetic operators supported by Scala language. Assume variable A holds 10 and variable B holds 20, then: Relational Operators: There are following relational operators supported by Scala language. Assume variable A holds 10 and variable B holds 20, then: Logical Operators: There are following logical operators supported by Scala language. Assume variable A holds Boolean value true and variable B holds Boolean value false, then: Bitwise Operators: The bitwise operators supported by Scala language are listed in the following table. Assume variable A holds 60 and variable B holds 13, then: Assignment Operators: There are following assignment operators supported by Scala language: Operators Precedence in Scala: Operator precedence determines the grouping of terms in an expression, and therefore how an expression is evaluated. Scala IF...ELSE Statements Following is the general form of a typical decision making IF...ELSE structure found in most of the programming languages: The if Statement: An if statement consists of a Boolean expression followed by one or more statements. Syntax: The syntax of an if statement is: if(Boolean_expression) { // Statements will execute if the Boolean expression is true } If the boolean expression evaluates to true then the block of code inside the if statement will be executed. If not, the first set of code after the end of the if statement (after the closing curly brace) will be executed. Example: object Test { def main(args: Array[String]) { var x = 10; if( x < 20 ){ println("This is if statement"); } } } This would produce the following result: C:/>scalac Test.scala C:/>scala Test This is if statement C:/> The if...else Statement: An if statement can be followed by an optional else statement, which executes when the Boolean expression is false.
Syntax: The syntax of an if...else is: if(Boolean_expression){ //Executes when the Boolean expression is true }else{ //Executes when the Boolean expression is false } Example: object Test { def main(args: Array[String]) { var x = 30; if( x < 20 ){ println("This is if statement"); }else{ println("This is else statement"); } } } This would produce the following result: C:/>scalac Test.scala C:/>scala Test This is else statement C:/> The if...else if...else Statement: An if statement can be followed by an optional else if...else statement, which is very useful to test various conditions using a single if...else if statement. Example: object Test { def main(args: Array[String]) { var x = 30; if( x == 10 ){ println("Value of X is 10"); }else if( x == 20 ){ println("Value of X is 20"); }else if( x == 30 ){ println("Value of X is 30"); }else{ println("This is else statement"); } } } This would produce the following result: C:/>scalac Test.scala C:/>scala Test Value of X is 30 C:/> Nested if...else Statement: It is always legal to nest if-else statements, which means you can use one if or else if statement inside another if or else if statement. Syntax: The syntax for a nested if...else is as follows: if(Boolean_expression 1){ //Executes when the Boolean expression 1 is true if(Boolean_expression 2){ //Executes when the Boolean expression 2 is true } } You can nest else if...else in a similar way as we have nested if statements. Example: object Test { def main(args: Array[String]) { var x = 30; var y = 10; if( x == 30 ){ if( y == 10 ){ println("X = 30 and Y = 10"); } } } } This would produce the following result: C:/>scalac Test.scala C:/>scala Test X = 30 and Y = 10 C:/> Scala Functions A function is a group of statements that together perform a task. A Scala function declaration has the following form: def functionName ([list of parameters]) : [return type] Methods are implicitly declared abstract if you leave off the equals sign and method body. The enclosing type is then itself abstract.
Function Definitions: A Scala function definition has the following form: object add{ def addInt( a:Int, b:Int ) : Int = { var sum:Int = 0 sum = a + b return sum } } A function which does not return anything can return Unit, which is equivalent to void in Java and indicates that the function does not return anything. Functions which do not return anything in Scala are called procedures. Following is the syntax: object Hello{ def printMe( ) : Unit = { println("Hello, Scala!") } } Calling Functions: Scala provides a number of syntactic variations for invoking methods. Following is the standard way to call a method: functionName( list of parameters ) If a function is being called using an instance of an object then we would use dot notation similar to Java as follows: [instance.]functionName( list of parameters ) Following is the final example to define and then call the same function: object Test { def main(args: Array[String]) { println( "Returned Value : " + addInt(5,7) ); } def addInt( a:Int, b:Int ) : Int = { var sum:Int = 0 sum = a + b return sum } } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test Returned Value : 12 C:/> Scala functions are the heart of Scala programming, and that's why Scala is regarded as a functional programming language. Following are a few important concepts related to Scala functions which should be understood by a Scala programmer. Scala Closures A closure is a function whose return value depends on the value of one or more variables declared outside this function. Consider the following piece of code with an anonymous function: val multiplier = (i:Int) => i * 10 Here the only variable used in the function body, i * 10, is i, which is defined as a parameter to the function. Now let us take another piece of code: val multiplier = (i:Int) => i * factor There are two free variables in multiplier: i and factor. One of them, i, is a formal parameter to the function.
Hence, it is bound to a new value each time multiplier is called. However, factor is not a formal parameter, so what is it? Let us add one more line of code: var factor = 3 val multiplier = (i:Int) => i * factor Now, factor has a reference to a variable outside the function but in the enclosing scope. Let us try the following example: object Test { def main(args: Array[String]) { println( "multiplier(1) value = " + multiplier(1) ) println( "multiplier(2) value = " + multiplier(2) ) } var factor = 3 val multiplier = (i:Int) => i * factor } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test multiplier(1) value = 3 multiplier(2) value = 6 C:/> The above function references factor and reads its current value each time. If a function has no external references, then it is trivially closed over itself. No external context is required. Scala Strings Consider the following simple example where we assign a string to a variable of type val: object Test { val greeting: String = "Hello, world!" def main(args: Array[String]) { println( greeting ) } } Here, the type of the value above is java.lang.String, borrowed from Java, because Scala strings are also Java strings. It is a very good point to note that every Java class is available in Scala. As such, Scala does not have a String class and makes use of Java Strings. So this chapter has been written keeping Java String as a base. In Scala, as in Java, a string is an immutable object, that is, an object that cannot be modified. On the other hand, objects that can be modified, like arrays, are called mutable objects. Since strings are very useful objects, in the rest of this section we present the most important methods that the class java.lang.String defines.
Creating Strings: The most direct way to create a string is to write: var greeting = "Hello world!"; or var greeting:String = "Hello world!"; Whenever it encounters a string literal in your code, the compiler creates a String object with its value, in this case "Hello world!", but if you like, you can give the String type explicitly as shown in the alternate declaration. object Test { val greeting: String = "Hello, world!" def main(args: Array[String]) { println( greeting ) } } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test Hello, world! C:/> As I mentioned earlier, the String class is immutable, so that once it is created a String object cannot be changed. If there is a necessity to make a lot of modifications to strings of characters then you should use the String Builder class available in Scala itself. String Length: You can obtain the number of characters in a string with the length() method: object Test { def main(args: Array[String]) { var palindrome = "Dot saw I was Tod"; var len = palindrome.length(); println( "String Length is : " + len ); } } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test String Length is : 17 C:/> Concatenating Strings: The String class includes a method for concatenating two strings: string1.concat(string2); This returns a new string that is string1 with string2 added to it at the end, as in: "My name is ".concat("Zara"); Strings are more commonly concatenated with the + operator, as in: "Hello," + " world" + "!" Which results in: "Hello, world!" Let us look at the following example: object Test { def main(args: Array[String]) { var str1 = "Dot saw I was "; var str2 = "Tod"; println("Dot " + str1 + str2); } } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test Dot Dot saw I was Tod C:/> Creating Format Strings: You have printf() and format() methods to print output with formatted numbers. The String class has an equivalent class method, format(), that returns a String object rather than a PrintStream object.
Let us look at the following example, which makes use of the printf() method: object Test { def main(args: Array[String]) { var floatVar = 12.456 var intVar = 2000 var stringVar = "Hello, Scala!" var fs = printf("The value of the float variable is %f, while the value of the integer variable is %d, and the string is %s", floatVar, intVar, stringVar); println(fs) } } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test The value of the float variable is 12.456000, while the value of the integer variable is 2000, and the string is Hello, Scala!() C:/> String Methods: Following is the list of methods defined by the java.lang.String class which can be used directly in your Scala programs: Scala Arrays Scala provides a data structure, the array, which stores a fixed-size sequential collection of elements of the same type. The index of the first element of an array is the number zero and the index of the last element is the total number of elements minus one. Declaring Array Variables: To use an array in a program, you must declare a variable to reference the array and you must specify the type of array the variable can reference. Here is the syntax for declaring an array variable: var z:Array[String] = new Array[String](3) or var z = new Array[String](3) Here, z is declared as an array of Strings that may hold up to three elements. You can assign values to individual elements or get access to individual elements; it can be done by using commands like the following: z(0) = "Zara"; z(1) = "Nuha"; z(4/2) = "Ayan" Here, the last example shows that in general the index can be any expression that yields a whole number. There is one more way of defining an array: var z = Array("Zara", "Nuha", "Ayan") The following picture represents an array myList. Here, myList holds ten double values and the indices are from 0 to 9. Processing Arrays: When processing array elements, we often use a for loop because all of the elements in an array are of the same type and the size of the array is known.
Here is a complete example showing how to create, initialize and process arrays: object Test { def main(args: Array[String]) { var myList = Array(1.9, 2.9, 3.4, 3.5) // Print all the array elements for ( x <- myList ) { println( x ) } // Summing all elements var total = 0.0; for ( i <- 0 to (myList.length - 1)) { total += myList(i); } println("Total is " + total); // Finding the largest element var max = myList(0); for ( i <- 1 to (myList.length - 1) ) { if (myList(i) > max) max = myList(i); } println("Max is " + max); } } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test 1.9 2.9 3.4 3.5 Total is 11.7 Max is 3.5 C:/> Multi-Dimensional Arrays: There are many situations where you would need to define and use multi-dimensional arrays (i.e., arrays whose elements are arrays). For example, matrices and tables are examples of structures that can be realized as two-dimensional arrays. Scala does not directly support multi-dimensional arrays and provides various methods to process arrays in any dimension. Following is the example of defining a two-dimensional array: var myMatrix = ofDim[Int](3,3) This is an array that has three elements each being an array of integers that has three elements. The code that follows shows how one can process a multi-dimensional array: import Array._ object Test { def main(args: Array[String]) { var myMatrix = ofDim[Int](3,3) // build a matrix for (i <- 0 to 2) { for ( j <- 0 to 2) { myMatrix(i)(j) = j; } } // print the two-dimensional array for (i <- 0 to 2) { for ( j <- 0 to 2) { print(" " + myMatrix(i)(j)); } println(); } } } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test 0 1 2 0 1 2 0 1 2 C:/> Concatenate Arrays: Following is the example which makes use of the concat() method to concatenate two arrays. You can pass more than one array as arguments to the concat() method. import Array._ object Test { def main(args: Array[String]) { var myList1 = Array(1.9, 2.9, 3.4, 3.5) var myList2 = Array(8.9, 7.9, 0.4, 1.5) var myList3 = concat( myList1, myList2) // Print all the array elements for ( x <- myList3 ) { println( x ) } } } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test 1.9 2.9 3.4 3.5 8.9 7.9 0.4 1.5 C:/> Create Array with Range: Following is the example which makes use of the range() method to generate an array containing a sequence of increasing integers in a given range. You can use the final argument as a step to create the sequence; if you do not use the final argument, then the step would be assumed as 1.
import Array._ object Test { def main(args: Array[String]) { var myList1 = range(10, 20, 2) var myList2 = range(10,20) // Print all the array elements for ( x <- myList1 ) { print( " " + x ) } println() for ( x <- myList2 ) { print( " " + x ) } } } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test 10 12 14 16 18 10 11 12 13 14 15 16 17 18 19 C:/> Scala Array Methods: Following are the important methods which you can use while playing with arrays. As shown above, you would have to import the Array._ package before using any of the mentioned methods. For a complete list of available methods, please check the official documentation of Scala. Scala Classes & Objects A class is a blueprint for objects. Once you define a class, you can create objects from the class blueprint with the keyword new. Following is a simple syntax to define a class in Scala: class Point(xc: Int, yc: Int) { var x: Int = xc var y: Int = yc def move(dx: Int, dy: Int) { x = x + dx y = y + dy println ("Point x location : " + x); println ("Point y location : " + y); } } This class defines two variables x and y and a method: move, which does not return a value. Class variables are called fields of the class, and methods are called class methods. The class name works as a class constructor which can take a number of parameters. The above code defines two constructor arguments, xc and yc; they are both visible in the whole body of the class.
As mentioned earlier, you can create objects using the keyword new and then you can access class fields and methods as shown below in the example: import java.io._ class Point(val xc: Int, val yc: Int) { var x: Int = xc var y: Int = yc def move(dx: Int, dy: Int) { x = x + dx y = y + dy println ("Point x location : " + x); println ("Point y location : " + y); } } object Test { def main(args: Array[String]) { val pt = new Point(10, 20); // Move to a new location pt.move(10, 10); } } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test Point x location : 20 Point y location : 30 C:/> Extending a Class: You can extend a base Scala class in a similar way as you can in Java, but there are two restrictions: method overriding requires the override keyword, and only the primary constructor can pass parameters to the base constructor. Let us extend our above class and add one more class method: class Location(override val xc: Int, override val yc: Int, val zc :Int) extends Point(xc, yc){ var z: Int = zc def move(dx: Int, dy: Int, dz: Int) { x = x + dx y = y + dy z = z + dz println ("Point x location : " + x); println ("Point y location : " + y); println ("Point z location : " + z); } } Such an extends clause has two effects: it makes class Location inherit all non-private members from class Point, and it makes the type Location a subtype of the type Point. So here the Point class is called the superclass and the class Location is called the subclass. Extending a class and inheriting all the features of a parent class is called inheritance, but Scala allows inheritance from just one class only. Let us take a complete example showing inheritance: import java.io._ class Point(val xc: Int, val yc: Int) { var x: Int = xc var y: Int = yc def move(dx: Int, dy: Int) { x = x + dx y = y + dy println ("Point x location : " + x); println ("Point y location : " + y); } } class Location(override val xc: Int, override val yc: Int, val zc :Int) extends Point(xc, yc){ var z: Int = zc def move(dx: Int, dy: Int, dz: Int) { x = x + dx y = y + dy z = z + dz println ("Point x location : " + x); println ("Point y location : " + y); println ("Point z location : " + z); } } object Test { def main(args: Array[String]) { val loc = new Location(10, 20, 15); // Move to a new location loc.move(10, 10, 5); } } Note that the two move methods do not override each other, since they are different definitions (the former takes two arguments while the latter takes three arguments).
When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test Point x location : 20 Point y location : 30 Point z location : 20 C:/> Singleton objects: Scala is more object-oriented than Java because in Scala we cannot have static members. Instead, Scala has singleton objects. A singleton is a class that can have only one instance, i.e., object. You create a singleton using the keyword object instead of the class keyword. Since you can't instantiate a singleton object, you can't pass parameters to the primary constructor. You have already seen all the examples using singleton objects where you called Scala's main method. Following is the same example showing a singleton: import java.io._ class Point(val xc: Int, val yc: Int) { var x: Int = xc var y: Int = yc def move(dx: Int, dy: Int) { x = x + dx y = y + dy } } object Test { def main(args: Array[String]) { val point = new Point(10, 20) printPoint def printPoint{ println ("Point x location : " + point.x); println ("Point y location : " + point.y); } } } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test Point x location : 10 Point y location : 20 C:/> Scala Traits A trait encapsulates method and field definitions, which can then be reused by mixing them into classes. When deciding whether to use a trait or a class, the following guidelines help: If the behavior will not be reused, then make it a concrete class. It is not reusable behavior after all. If it might be reused in multiple, unrelated classes, make it a trait. Only traits can be mixed into different parts of the class hierarchy. If you want to inherit from it in Java code, use an abstract class. If you plan to distribute it in compiled form, and you expect outside groups to write classes inheriting from it, you might lean towards using an abstract class. If efficiency is very important, lean towards using a class. Scala Pattern Matching Pattern matching is the second most widely used feature of Scala, after function values and closures. Scala provides great support for pattern matching for processing messages.
A pattern match includes a sequence of alternatives, each starting with the keyword case. Each alternative includes a pattern and one or more expressions, which will be evaluated if the pattern matches. An arrow symbol => separates the pattern from the expressions. Here is a small example, which shows how to match against an integer value: object Test { def main(args: Array[String]) { println(matchTest(3)) } def matchTest(x: Int): String = x match { case 1 => "one" case 2 => "two" case _ => "many" } } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test many C:/> The block with the case statements defines a function, which maps integers to strings. The match keyword provides a convenient way of applying a function (like the pattern matching function above) to an object. Following is a second example, which matches a value against patterns of different types: object Test { def main(args: Array[String]) { println(matchTest("two")) println(matchTest("test")) println(matchTest(1)) } def matchTest(x: Any): Any = x match { case 1 => "one" case "two" => 2 case y: Int => "scala.Int" case _ => "many" } } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test 2 many one C:/> The first case matches if x refers to the integer value 1. The second case matches if x is equal to the string "two". The third case consists of a typed pattern; it matches against any integer and binds the selector value x to the variable y of type integer.
Following is another form of writing same match...case expressions with the help of braces {...}: object Test { def main(args: Array[String]) { println(matchTest("two")) println(matchTest("test")) println(matchTest(1)) } def matchTest(x: Any){ x match { case 1 => "one" case "two" => 2 case y: Int => "scala.Int" case _ => "many" } } } Matching Using case Classes: The case classes are special classes that are used in pattern matching with case expressions. Syntactically, these are standard classes with a special modifier: case. Following is a simple pattern matching example using case class: object Test { def main(args: Array[String]) { val alice = new Person("Alice", 25) val bob = new Person("Bob", 32) val charlie = new Person("Charlie", 32) for (person <- List(alice, bob, charlie)) { person match { case Person("Alice", 25) => println("Hi Alice!") case Person("Bob", 32) => println("Hi Bob!") case Person(name, age) => println("Age: " + age + " year, name: " + name + "?") } } } // case class, empty one. case class Person(name: String, age: Int) } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test Hi Alice! Hi Bob! Age: 32 year, name: Charlie? C:/> Adding the case keyword causes the compiler to add a number of useful features automatically. The keyword suggests an association with case expressions in pattern matching. First, the compiler automatically converts the constructor arguments into immutable fields (vals). The val keyword is optional. If you want mutable fields, use the var keyword. So, our constructor argument lists are now shorter. Second, the compiler automatically implements equals, hashCode, and toString methods to the class, which use the fields specified as constructor arguments. So, we no longer need our own toString methods. Finally, also, the body of Person class is gone because there are no methods that we need to define! 
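To make those compiler-generated features concrete, here is a minimal sketch (the Person case class mirrors the one above; the object name CaseClassDemo is just illustrative):

```scala
object CaseClassDemo {
  case class Person(name: String, age: Int)

  def main(args: Array[String]): Unit = {
    val alice = Person("Alice", 25)
    // toString is generated from the constructor arguments
    println(alice)                          // prints Person(Alice,25)
    // equals compares the constructor fields, not object references
    println(alice == Person("Alice", 25))   // prints true
    // copy builds a new instance with selected fields changed
    val older = alice.copy(age = 26)
    println(older.age)                      // prints 26
  }
}
```

Note that none of these methods were written by hand; the case keyword alone produced them.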
Scala Regular Expressions Scala supports regular expressions through Regex class available in the scala.util.matching package. Let us check an example where we will try to find out word Scala from a statement: import scala.util.matching.Regex object Test { def main(args: Array[String]) { val pattern = "Scala".r val str = "Scala is Scalable and cool" println(pattern findFirstIn str) } } When the above code is compiled and executed, it produces the following result: C:/>scalac Test.scala C:/>scala Test Some(Scala) C:/> We create a String and call the r( ) method on it. Scala implicitly converts the String to a RichString and invokes that method to get an instance of Regex. To find a first match of the regular expression, simply call the findFirstIn() method. If instead of finding only the first occurrence we would like to find all occurrences of the matching word, we can use the findAllIn( ) method and in case there are multiple Scala words available in the target string, this will return a collection of all matching words. 
You can make use of the mkString() method to concatenate the resulting list, you can use a pipe (|) to match both the lower-case and capitalized forms of Scala, and you can use the Regex constructor instead of the r() method to create a pattern, as follows:

import scala.util.matching.Regex

object Test {
   def main(args: Array[String]) {
      val pattern = new Regex("(S|s)cala")
      val str = "Scala is scalable and cool"
      println((pattern findAllIn str).mkString(","))
   }
}

When the above code is compiled and executed, it produces the following result:

C:/>scalac Test.scala
C:/>scala Test
Scala,scala
C:/>

If you would like to replace matching text, you can use replaceFirstIn() to replace the first match or replaceAllIn() to replace all occurrences, as follows:

object Test {
   def main(args: Array[String]) {
      val pattern = "(S|s)cala".r
      val str = "Scala is scalable and cool"
      println(pattern replaceFirstIn(str, "Java"))
   }
}

When the above code is compiled and executed, it produces the following result:

C:/>scalac Test.scala
C:/>scala Test
Java is scalable and cool
C:/>

Forming regular expressions: Scala inherits its regular expression syntax from Java, which in turn inherits most of the features of Perl. (The original page includes a table of the regular-expression metacharacter syntax available in Java, with example patterns; the table was lost in extraction.) Note that a backslash in a pattern must be doubled (for example, \\d) when written inside a Scala string literal. Check the following example:

import scala.util.matching.Regex

object Test {
   def main(args: Array[String]) {
      val pattern = new Regex("abl[ae]\\d+")
      val str = "ablaw is able1 and cool"
      println((pattern findAllIn str).mkString(","))
   }
}

When the above code is compiled and executed, it produces the following result:

C:/>scalac Test.scala
C:/>scala Test
able1
C:/>

Scala Exception Handling

Scala's exceptions work like exceptions in many other languages, such as Java. Instead of returning a value in the normal way, a method can terminate by throwing an exception. However, Scala doesn't actually have checked exceptions.
When you want to handle exceptions, you use a try{...}catch{...} block as you would in Java, except that the catch block uses pattern matching to identify and handle the exceptions.

Throwing exceptions: Throwing an exception looks the same as in Java. You create an exception object and then you throw it with the throw keyword:

throw new IllegalArgumentException

Catching exceptions: Scala allows you to try/catch any exception in a single block and then perform pattern matching against it using case blocks, as shown below (the example code was truncated in extraction; it is reconstructed here to match the output shown):

import java.io.FileReader
import java.io.FileNotFoundException
import java.io.IOException

object Test {
   def main(args: Array[String]) {
      try {
         val f = new FileReader("input.txt")
      } catch {
         case ex: FileNotFoundException => println("Missing file exception")
         case ex: IOException => println("IO Exception")
      }
   }
}

When the above code is compiled and executed, it produces the following result:

C:/>scalac Test.scala
C:/>scala Test
Missing file exception
C:/>

The behavior of this try-catch expression is the same as in other languages with exceptions. The body is executed, and if it throws an exception, each catch clause is tried in turn.

The finally clause: You can wrap an expression with a finally clause if you want some code to execute no matter how the expression terminates:

import java.io.FileReader
import java.io.FileNotFoundException
import java.io.IOException

object Test {
   def main(args: Array[String]) {
      try {
         val f = new FileReader("input.txt")
      } catch {
         case ex: FileNotFoundException => println("Missing file exception")
         case ex: IOException => println("IO Exception")
      } finally {
         println("Exiting finally...")
      }
   }
}

When the above code is compiled and executed, it produces the following result:

C:/>scalac Test.scala
C:/>scala Test
Missing file exception
Exiting finally...
C:/>

Scala Extractors

An extractor in Scala is an object that has a method called unapply as one of its members. The purpose of that unapply method is to match a value and take it apart. Often, the extractor object also defines a dual method apply for building values, though this is not required. The following example shows an extractor object for email addresses:

object Test {
   def main(args: Array[String]) {
      println ("Apply method : " + apply("Zara", "gmail.com"));
      println ("Unapply method : " + unapply("Zara@gmail.com"));
      println ("Unapply method : " + unapply("Zara Ali"));
   }
   // The injection method (optional)
   def apply(user: String, domain: String) = {
      user + "@" + domain
   }
   // The extraction method (mandatory)
   def unapply(str: String): Option[(String, String)] = {
      val parts = str split "@"
      if (parts.length == 2) Some(parts(0), parts(1)) else None
   }
}

The apply method has its usual meaning: it turns Test into an object that can be applied to arguments in parentheses in the same way a method is applied. So you can write Test("Zara", "gmail.com") to construct the string "Zara@gmail.com". The unapply method is what turns Test into an extractor, and it reverses the construction process of apply.
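Java has no built-in unapply, but the construct/deconstruct symmetry of an extractor can be mimicked with Optional; a rough sketch of the same email example (the method names simply mirror the Scala ones and are not a standard Java idiom):

```java
import java.util.Optional;

public class Main {
    // Mirrors the extractor's apply: build an address from its parts.
    static String apply(String user, String domain) {
        return user + "@" + domain;
    }

    // Mirrors unapply: take the address apart, or signal failure with an empty Optional.
    static Optional<String[]> unapply(String str) {
        String[] parts = str.split("@");
        return parts.length == 2 ? Optional.of(parts) : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(apply("Zara", "gmail.com"));
        System.out.println(unapply("Zara@gmail.com")
                .map(p -> p[0] + " / " + p[1]).orElse("no match"));
        System.out.println(unapply("Zara Ali")
                .map(p -> p[0] + " / " + p[1]).orElse("no match"));
    }
}
```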
Where apply takes two strings and forms an email address string out of them, unapply takes an email address and potentially returns two strings: the user and the domain of the address. It returns Some(user, domain) if str is an email address, or None if str is not one. Here are some examples:

unapply("Zara@gmail.com") equals Some("Zara", "gmail.com")
unapply("Zara Ali") equals None

When the above code is compiled and executed, it produces the following result:

C:/>scalac Test.scala
C:/>scala Test
Apply method : Zara@gmail.com
Unapply method : Some((Zara,gmail.com))
Unapply method : None
C:/>

Pattern Matching with Extractors: When an instance of a class is followed by parentheses with a list of zero or more parameters, the compiler invokes the apply method on that instance. We can define apply both in objects and in classes. As mentioned above, the purpose of the unapply method is to extract a specific value we are looking for. It does the opposite of what apply does. When matching against an extractor object with the match statement, the unapply method is executed automatically, as shown below:

object Test {
   def main(args: Array[String]) {
      val x = Test(5)
      println(x)
      x match {
         case Test(num) => println(x + " is bigger two times than " + num) // unapply is invoked
         case _ => println("i cannot calculate")
      }
   }
   def apply(x: Int) = x * 2
   def unapply(z: Int): Option[Int] = if (z % 2 == 0) Some(z / 2) else None
}

When the above code is compiled and executed, it produces the following result:

C:/>scalac Test.scala
C:/>scala Test
10
10 is bigger two times than 5
C:/>

Scala Files I/O

Scala can make use of any Java object, and java.io.File is one of the objects that can be used in Scala programming to read and write files.
Following is an example of writing to a file:

import java.io._

object Test {
   def main(args: Array[String]) {
      val writer = new PrintWriter(new File("test.txt"))
      writer.write("Hello Scala")
      writer.close()
   }
}

When the above code is compiled and executed, it creates a file named test.txt with the content "Hello Scala", which you can check yourself.

C:/>scalac Test.scala
C:/>scala Test
C:/>

Reading a line from the screen: Sometimes you need to read user input from the screen and then proceed with further processing. The following example shows you how to read input from the screen:

object Test {
   def main(args: Array[String]) {
      print("Please enter your input : ")
      val line = Console.readLine
      println("Thanks, you just typed: " + line)
   }
}

When the above code is compiled and executed, it prompts you for input and reads it until you press the ENTER key.

C:/>scalac Test.scala
C:/>scala Test
Please enter your input : Scala is great
Thanks, you just typed: Scala is great
C:/>

Reading file content: Reading from files is really simple. You can use Scala's Source class and its companion object to read files. Following is an example which shows you how to read from the "test.txt" file we created earlier:

import scala.io.Source

object Test {
   def main(args: Array[String]) {
      println("Following is the content read:")
      Source.fromFile("test.txt").foreach {
         print
      }
   }
}

When the above code is compiled and executed, it will read the test.txt file and display its content on the screen:

C:/>scalac Test.scala
C:/>scala Test
Following is the content read:
Hello Scala
C:/>
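The same write-then-read round trip can be written against java.io directly, which is what the Scala examples delegate to; a minimal sketch (the file name test.txt matches the examples above, and Files.readAllBytes stands in for Source.fromFile):

```java
import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;

public class Main {
    public static void main(String[] args) throws IOException {
        File file = new File("test.txt");

        // Write, mirroring the PrintWriter example above.
        PrintWriter writer = new PrintWriter(file);
        writer.write("Hello Scala");
        writer.close();

        // Read the content back, mirroring Source.fromFile.
        String content = new String(Files.readAllBytes(file.toPath()));
        System.out.println(content);
    }
}
```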
http://www.tutorialspoint.com/scala/scala_quick_guide.htm
We can get an instance method reference in two ways: from an object instance or from the class name. Basically we have the following two forms. Here instance represents any object instance, and ClassName is the name of a class, such as String or Integer. instance and ClassName are called the receiver. More specifically, instance is called a bound receiver while ClassName is called an unbound receiver. We call instance a bound receiver since the receiver is bound to the instance; ClassName is an unbound receiver since the receiver is bound later.

A bound receiver has the following form:

instance::MethodName

In the following code we use the built-in functional interface Supplier as the lambda expression type. At first we define a lambda expression in the normal way. The lambda expression accepts no parameter and returns the length of the string "java2s.com". Then we create a String instance with "java2s.com" and use its length method as the instance method reference. Bound means we have already specified the instance. The following example shows how to use a bound receiver and a method with no parameters to create an instance method reference.

import java.util.function.Supplier;

public class Main {
   public static void main(String[] argv) {
      Supplier<Integer> supplier = () -> "java2s.com".length();
      System.out.println(supplier.get());

      Supplier<Integer> supplier1 = "java2s.com"::length;
      System.out.println(supplier1.get());
   }
}

The code above prints 10 twice.

The following example shows how to use a bound receiver and a method with parameters to create an instance method reference.

import java.util.function.Consumer;

public class Main {
   public static void main(String[] argv) {
      Util util = new Util();
      Consumer<String> consumer = str -> util.print(str);
      consumer.accept("Hello");

      Consumer<String> consumer1 = util::print;
      consumer1.accept("java2s.com");

      util.debug();
   }
}

class Util {
   private int count = 0;
   public void print(String s) {
      System.out.println(s);
      count++;
   }
   public void debug() {
      System.out.println("count:" + count);
   }
}

The code above prints Hello and java2s.com, followed by count:2.

An unbound receiver uses the following syntax:

ClassName::instanceMethod

It is the same syntax we use to reference a static method. From the following code we can see that the input type is the type of ClassName. In the following code we use String::length, so the functional interface input type is String. The lambda expression gets its input when it is used.

The following code uses the String length method as an unbound instance method reference. The String length method is usually called on a string instance and returns the length of that string. Therefore the input is the String type and the output is the int type, which matches the built-in Function functional interface. Each time we call strLengthFunc we pass in a string value, and the length method is called on the passed-in string.

import java.util.function.Function;

public class Main {
   public static void main(String[] argv) {
      Function<String, Integer> strLengthFunc = String::length;

      String name = "java2s.com";
      int len = strLengthFunc.apply(name);
      System.out.println("name = " + name + ", length = " + len);

      name = "";
      len = strLengthFunc.apply(name);
      System.out.println("name = " + name + ", length = " + len);
   }
}

The code above generates the following result:

name = java2s.com, length = 10
name = , length = 0

The following code defines a class Util with a static method called append. The append method accepts two String type parameters and returns a String type result.
Then the append method is used to create a lambda expression and assigned to the Java built-in BiFunction functional interface. The signature of append matches the signature of the abstract method defined in the BiFunction functional interface.

import java.util.function.BiFunction;

public class Main {
   public static void main(String[] argv) {
      BiFunction<String, String, String> strFunc = Util::append;
      String name = "java2s.com";
      String s = strFunc.apply(name, "hi");
      System.out.println(s);
   }
}

class Util {
   public static String append(String s1, String s2) {
      return s1 + s2;
   }
}

The code above prints java2s.comhi.

The keyword super, which is only used in an instance context, references the overridden method. We can use the following syntax to create a method reference that refers to the instance method in the parent type:

ClassName.super::instanceMethod

The following code defines a parent class called ParentUtil. In ParentUtil there is a method named append, which appends two String values together. Then a child class called Util is created, extending ParentUtil. Inside the Util class the append method is overridden. In the constructor of Util we create two lambda expressions: one using the append method from Util, the other using the append method from the ParentUtil class. We use this::append to reference the current class, while Util.super::append references the method from the parent class.

import java.util.function.BiFunction;

public class Main {
   public static void main(String[] argv) {
      new Util();
   }
}

class Util extends ParentUtil {
   public Util() {
      BiFunction<String, String, String> strFunc = this::append;
      String name = "java2s.com";
      String s = strFunc.apply(name, " hi");
      System.out.println(s);

      strFunc = Util.super::append;
      name = "java2s.com";
      s = strFunc.apply(name, " Java Lambda Tutorial");
      System.out.println(s);
   }

   @Override
   public String append(String s1, String s2) {
      System.out.println("child append");
      return s1 + s2;
   }
}

class ParentUtil {
   public String append(String s1, String s2) {
      System.out.println("parent append");
      return s1 + s2;
   }
}

The code above generates the following result:

child append
java2s.com hi
parent append
java2s.com Java Lambda Tutorial
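To tie the variants above together, here is a single sketch contrasting a bound receiver, an unbound receiver with no argument, and an unbound receiver whose method takes an argument (the variable names are illustrative, not from the tutorial):

```java
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.function.Supplier;

public class Main {
    public static void main(String[] args) {
        String site = "java2s.com";

        // Bound receiver: the instance is fixed when the reference is created.
        Supplier<Integer> bound = site::length;

        // Unbound receiver: the instance becomes the functional interface's first argument.
        Function<String, Integer> unbound = String::length;

        // Unbound receiver with a parameter: first argument is the receiver,
        // second argument is the method's own parameter.
        BiFunction<String, String, String> concat = String::concat;

        System.out.println(bound.get());                  // 10
        System.out.println(unbound.apply("hi"));          // 2
        System.out.println(concat.apply("java2s", ".com")); // java2s.com
    }
}
```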
http://www.java2s.com/Tutorials/Java/Java_Lambda/0140__Java_Instance_Method_Reference.htm
Sophistication in Web Applications? 197 whit537 asks: "Anyone who uses Gmail for 5 minutes can see that it's a pretty dern sophisticated web application. But just how sophisticated? Well, first of all, the UI is composed of no less than nine iframes (try turning off the page styles in Firefox with View > Page Style). But then consider that these iframes are generated and controlled by a 1149 line javascript. This script includes no less than 1001 functions, and 998 of these functions have one- or two-letter names! They're obviously not maintaining this script by hand in that form. So do they write human-readable javascripts and then run them all together, or have they developed some sort of web app UI toolkit in Python? Does Gmail need to be this complex or is the obfuscation a deliberate attempt to prevent reverse-engineering? And are there any other web apps that are this sophisticated?" not simply obfuscation (Score:5, Insightful) Re:not simply obfuscation (Score:2) Exactly... (Score:2) Re:Exactly... (Score:2) I agree. One webapp I wrote a while back I ended up needing to put the SHA1 hash algorithm on the client-side in javascript and I did the same thing in order to keep things as snappy as possible. I ripped it down to the smallest possible size (by hand) by renaming all the variables and function names as short as possible and eliminating all useless whitespace, etc. As a side effect the source became fairly obfuscated like Google's looks, not that I cared to obfuscate sha1. Re:not simply obfuscation (Score:2) I'm amazed that this even made Slashdot; Google often takes care to reduce the bandwidth usage of its pages (view source on google.com sometime....) I do a fair amount of JS coding, and I've taken to distributing the majority of my scripts [twinhelix.com] in two separate "commented" and "compressed" JS files for exactly that reason. 
My guess is that Google in fact has a fully commented and readable copy of their JavaScript code that they run through a preprocessor/compressor; I use my compressor [twinhelix.com] (basically a hacke Re:not simply obfuscation (Score:2) Re:not simply obfuscation (Score:2) Re:not simply obfuscation (Score:2) Re:not simply obfuscation (Score:2) Re:not simply obfuscation (Score:4, Informative) BTW... ever noticed how google uses text ads? Do you think the only reason they do that is because it's less intrusive? Wrong again - it also saves a lot of bandwidth compared to an image ad When you serve billions and billions of pages, shaving off a single byte on each page saves you GIGABYTES of traffic. Re:-1 WTF? (Score:2) I didnt say that ISPs charged google on a per-page basis, but what google pays for their bandwidth is the sum of all pages served and in there lies your per-page bandwidth cost. Supermarkets dont charge you by the pea, but if you buy a bag of 500 peas for $5.00, then you have a per-pea cost of $0.01. Also, if you have 50 part-time monkeys making $10K/year to swap out br Re:not simply obfuscation (Score:2) Why don't you and everyone who thinks so send me a nickle a day for the next several years. Chances are there aren't nearly as many of you who think that, as there are people who hit Gmail daily. And chances are still pretty good that a little less than $20/year from each of you would manage to pay off my mo Re:not simply obfuscation (Score:2) it's done for size and speed in j2me too(most of the time the obfuscation isn't _that_ hard to dig open if you wish). Re:not simply obfuscation (Score:2) Ever hear of CADOL? (Score:2) NL NL rather than NL 2. Siebel? (Score:1) Incredibly well designed, flexible, and scalable. They are years ahead of the nearest competition. Huh? (Score:1) 1) How the hell would we know? 2) Where did you get "in Python" from? Re:Huh? 
(Score:4, Informative) However, it was completely uncalled for speculation that had no place in a Slashdot article. I'm with you, "huh?" Python Programmers (Score:4, Funny) Re:Python Programmers (Score:2) Re:Python Programmers (Score:2) Please raise a bug report on SourceForge with the Python version, Windows OS and as short a sample script as you can that replicates the problem. However, in almost all cases this is caused by a misbehaving third-party library written in C/C+ Re:Python Programmers (Score:2) Re:Huh? (Score:3, Insightful) Google writes A LOT in Javascript. It would not surprise me, although I have no evidence of this, if they wrote the code in their choice editor and then ran a python app that condensed the code to remove space, renamed the functions, and replaced all function references. At 1000+ functions, if the function names had just 5 letters each (not much if you're not being terse), that would be Re:Huh? (Score:3, Insightful) Well, that js file would be cached by the browser, hopefully, not reloaded with every single page load. Re:Huh? (Score:2) One more for sure (Score:1) Its nicely obsfrucated, but it's open source... some of their algorithms and predicting phases are quite complex. I decided to do something a little more basic and written in C. mmmmmm C... not obsuring (Score:1, Redundant) Gmail not that impressive (Score:2, Insightful) (*Well, many 'real programmers' are loath to do rich client stuff in JS, perferring their server side frameworks instead. But once you get the hang of it, it's pretty nice.) Re:Gmail not that impressive (Score:3, Insightful) Looking at the Goog Re:big stinking piles of code (Score:2) I think this is really all I have to say about that. [icarus.net] Re:Gmail not that impressive (Score:2) Dreamweaver monkey's are web designers. They are visual types who can't handle even a simple code like html (which is why they are using an html editor rather than a text editor). 
Web developers are programmers who happen to work with web based applications. Two different animals. Re:Gmail not that impressive (Score:2) "Most webmasters have more skills then many coders. Application languages may be more complex but web programming and design take much more creativity." More skills at what? Most "webmasters" are frontpage users. Not saying all but most are just putting up html using a visual editor. Now some are very tallented. As to web design taking much more creativity? How do you figure? I mean an app is an app. None of your statments have any basis in fact. Indispensable web applications (Score:1, Troll) Lately, to me at least, a similar potential became clear with Suprnova (remember when someone posted the link to Jon Stewart on CNN?). But I fear for both of these web apps,for the same reason Real's attempt to own Internet video was bad, Apple's pitch to own web music is bad, and MS's ongoing attempts to own HTML, etc. are bad. The ubiquity of MP3s, Bit Torrent, Mozilla, Mplayer and VLC, and Linux in general have held back these bastards as they tr MSN's Web Messenger Is Impressive (Score:3, Interesting) MSN's [msn.com] is a web-based MSN client implemented using a combination of HTML and Javascript. The source for the javascript is available here [msn.com]. I was looking into how it worked the other day and tidied the source into a more readable form [bdash.net.nz]. At least MSN had the decency to leave human-readable function names... this fact alone makes the code reasonably understandable. Re:MSN's Web Messenger Is Impressive (Score:2) Re:MSN's Web Messenger Is Impressive (Score:2) Re:MSN's Web Messenger Is Impressive (Score:2) Sure, that would be an ideal tool to use. Sadly, it is designed for use with C and doesn't work very well with Javascript code. 
In particular it seems to struggle quite badly (which is understandable) with function definitions of the form: Gmail accounts (Score:2) Re:Gmail accounts (Score:2) Yes and No (Score:2) Re:Yes and No (Score:1) Re:Yes and No (Score:2) Of course, 1149 codes is after being condensed without comments. Anyone can probably crank out 1149 lines of javascript in a couple of days. The guesstimate of a month is with full testing, debugging, making it work cleanly and intuitively, creating the user interface, etc. Writing the code isn't always the hardest part of creating a fully functional site. The way you put it, you'd think that a single developer at Google just whipped out a why Python if you have JavaScript? (Score:2, Interesting) developed some sort of web app UI toolkit in Python? This is why I call Python "Java of the open source world".. so many people think all programming begins and ends in Python. JavaScript is *already* a sophisticated, object-oriented language. In fact the design of the language is somewhat cleaner than Python. Why do you think they would write it again in Python somehow? Re:why Python if you have JavaScript? (Score:2) I'll add that the majority of drawbacks in javascript are deficiencies in the DOM. 'select' objects and the rather inflexible event model come to mind as prime examples of this. Javascript, by itself, is a clean and compact scripting language that gets the job done well. And even those issues are fairly easy to code around thanks to how flexible javascript is. Sure... (Score:2) Is it at all needed? (Score:2) For example, embedded URLs in emails don't just link to the website, or utilise the 'target' attribute - they use javascript which prevents me from opening them in new tabs (I have to drag them onto the tab bar). Same thing goes for email conversations - but I can't drag those, so there's no tab Re:Is it at all needed? 
(Score:2) If you're talking about Firefox tabs, I've had no problem CTRL-clicking and open a link inside an email into a tab... Re:Is it at all needed? (Score:2) Re:Is it at all needed? (Score:2) Re:Is it at all needed? (Score:2) Re:Is it at all needed? (Score:2) (I also don't have an Adblocker installed) It's all a matter of perception (Score:3, Interesting) So they get back to the CEO's office, and he uses his PC to dial up Dow Jones News Retrieval service and runs a monster WAIS search.. which used a CM1 that Hillis sold to Dow Jones. Re:It's all a matter of perception-Hard Times. (Score:2) I'll start the list: 1. Counting votes. Re:It's all a matter of perception-Hard Times. (Score:2) GMail Invite (Score:2) Re:GMail Invite (Score:2) Plone never ceases to amaze me... (Score:2) Run one of the videos. If you have Plone give it a try. Slick. Bandwidth Savings (Score:2) Google's all about HTML/JS compression. I remember reading an article about how Google goes to great lengths to reduce their HTML/JS fingerprint. It ends up resulting in real savings. Browser Incompatibility (Score:2) Google needs to focus on their search engine (Score:2) math & case sensitivity observations (Score:2) I've never seen the gmail code, but let's do some basic math, shall we? 26 letters in the alphabet 676 2-letter combinations + 26 possibilities for the function names with 1 letter = 702. 702 possible function names, as long as we ignore case. BUT, AFAIK, javascript is a case sensative beast -- function "bb" is not the same at "BB". 998 - 702 = 296. 296 could be a couple of t Gmail is just the beginning of such apps (Score:2) Re:If they are smart, and they are, (Score:2) Re:If they are smart, and they are, (Score:1, Flamebait) Re:If they are smart, and they are, (Score:4, Informative) Java is an object oriented language, but I could certainly write Java code that would be a major headache to maintain if I chose to do so. 
I think most maintenance problems come from poor coding habits, and not the language its self. Re:If they are smart, and they are, (Score:2) People seem to have it out for Javascript at Slashdot. Heh. A lot of negativity towards the language. I've seen some beautiful javascript, but I guess I'm one of the few. Re:If they are smart, and they are, (Score:2) I think this is true, but language is a factor. I have a soft spot in my heart for Perl, but I would much rather get handed 100kloc of badly written Java than badly written Perl. Re:If they are smart, and they are, (Score:2) If you mean Perl doesn't force you to write good code the way Java does, yeah. And I'm happy for it when I need to do something hard, like multithreadded servers. But if you're suggesting Perl doesn't let you write good code you'd be wrong. There are some pretty big, successful, rapid-release perl projects out there that are well done and well maintained. Spamassassin is just one example of a public one. Ticketmaster would On the subject of code maintenance.... (Score:2) Javascript, Perl and other interpreted languages have one enormous advantage when it comes to maintenance - particularly when you're trying to maintain a system written by someone else. Because they're interpreted, you can always find the source code - and because you can copy the source directly from the production deployment, you know that the source you're working with is the correct version. You ever tried to debug or modify a compiled application where the original developer has moved the binaries onto Re:If they are smart, and they are, (Score:2) Re:If they are smart, and they are, (Score:4, Insightful) Re:If they are smart, and they are, (Score:1, Interesting) So it's funny when you and the original poster opine that it must be written in some "other" language.. JS is actually more sophisticated than, say, Python! I recently saw some JS code online that added many Ruby-like dy Yah, good for Javascript! 
(Score:3, Interesting) Dynamic languages are kick ass, I really really like them, but they're for prototyping, not writing maintainable Re:Yah, good for Javascript! (Score:4, Interesting) This is a myth [artima.com], and has been proven false countless of times, such as by these guys [zope.org], or these guys [gnu.org], or even these guys [php.net], or, God forbid, you may have heard of these guys [slashdot.org]. First, the term "interpreted dynamic language" is vague and misleading. Interpretation has nothing to do with code maintainability. (You can interpret C, and you can compile putatively interpreted languages such as Java [gnu.org] and Python [sourceforge.net] to native code; indeed Java has been natively compiled for years, and the fact that it is just-in-time compilation is irrelevant). And what does "dynamic" mean? Do you mean a dynamically, as opposed to statically, typed language? Do you mean runtime introspection? Self-modification and metaprogramming? Runtime name resolution? What? I suspect you mean a combination of these. Python, Perl, Ruby, JavaScript, PHP, Haskell, Lisp and OCaml have these features. C++ can be considered a "dynamic" language, as can Java, C#, etc. So why do you claim that these languages are not maintainable? These newfangled languages are more rapid to develop in than lower-level languages. Maintenance is simpler because the languages are simpler, higher-level and more easily maintained. For example, the absence of a separate compile/link cycle means I can get from changing a source line to testing the source line quicker. In many cases, reproducing or debugging a bug is simpler in, say, Python than in C, because the infrastructure itself is simpler. 
Pure Python, for example, does not have memory access violation errors; there's no way your Python code can read or write an invalid pointer, write beyond the end of a buffer and so on; a whole class of pointer errors, most of which have security repercussions, are annihilated by this feature. Similarly, Python uses exceptions, so nobody can forget to check and propagate a function's error return value. More often than not, errors that surface in these languages are high-level problems, which is good, because those are simpler than the ones involving someone forgetting to call free() on an allocated buffer or accounting for overflow when shifting a bit mask. The uncertainty involved in the dynamic typing/late binding model of such languages is compensated for through unit testing. Oh, and JavaScript, a "dynamic language", is being used by Google in a production system, and Google is known to use Python and Ruby in their systems. I suggest you call them up and tell them their languages aren't suitable. Re:Yah, good for Javascript! (Score:3, Interesting) Yes. JavaScript is a poor language, but for other reasons than a lack of a static type system. You're obviously trolling. Present us with arguments supporting to the proposition that dynamic typing decreases maintainability, and we'll have a discussion. Until then, you're just spouting FUD. I have already given ample explanations for my view, but here's a counter Re:Yah, good for Javascript! (Score:3, Interesting) Re:Yah, good for Javascript! (Score:3, Interesting) Please. I mentioned unit testing in my first reply, where I wrote: I also linked to an interview with Guido van Rossum where he talks about this very topic, so if you think I'm ignorant It shouldn't be that hard. 
(Score:2) this means the execution environment determines the behavior of namespace/module resolution issues (by dynamically loading code from external files or whatever as required with some intrinsic) This way you can break your project into independant modules, implement test harnesses, etc. javascript allows you to scope and prototype objects and defintions. You provide namespace scoping by using nested definitions. And prototypes (i.e. classes, not clonable objects) Re:Yah, good for Javascript! (Score:2) AOP is pretty integral to maintaining dynamic code, but thats yet another can of worms. the situation gets messy, but ultimately it needs to get messy. dynamic languages are convenient to code in, but essential to the future of web architecture. system.reflection is pretty juicy stuff. Re:If they are smart, and they are, (Score:2) I second this question! I'm also wondering why the heck the original post that started this whole thread has been modded Flamebait? Re:If they are smart, and they are, (Score:2) Re:Just quick and easy (Score:1) Re:Just quick and easy (Score:4, Interesting) Kirby Re:Just quick and easy (Score:4, Funny) For the number of times "ii" occurs in english you could save yourself a character, right there. Now, I'm off for a bit of skiing... Re:Just quick and easy (Score:3, Funny) Re:Just quick and easy (Score:3, Funny) Re:Just quick and easy (Score:2) The sig is what makes this post doubleplusfunny. But to stay on topic: I you wrote proper code, you would never, never ever search/grep or a variable like 'i'. Such a variable name belongs to a small loop (e.g. one that iterates through an array and increments an index) that can be read and understood quickly. 
It still might be better, depending on the circumstances, to call it 'count', 'xCoord', or 'offset', if Re:Just quick and easy (Score:3, Interesting) Aside: I generally use c for my incrementing variables, and foo for my unknown type variables (common in r Re:Just quick and easy (Score:2) I could probably map a macro to something like "search whole word" if I felt like it. However, I've fo Re:Just quick and easy (Score:2) Without quite a bit of extra typing? You just need to add triangle brackets to signifiy the word break. (Don't forget to escape them.) For instance, if I want to search for words ending in x, I type "/x\>" (the front slash to signify the search). For the word "i" all alone, it's "/\". Simple as that. Re:Just quick and easy (Score:2) vi-family editors will certainly edit on a word by word basis - hit right and then 'w' (see above for searching on a word), and I'm fairly certain that vim will allow you to specify what constitutes a word break. I agree that vi is pretty powerful. I used it for most of a decade when I was on Unix variants. -- Evan Re:Just quick and easy (Score:2) Oh my. Re:Just quick and easy (Score:4, Informative) In regular C, for a function, it's actually highly useful, and extremely desirable. Go look in any extremely large body of C code. You'll find out just how desireable C static functions are. They are used all the time inside of the Linux kernel. They guarantee that the linker won't make that symbol available externally. Which is great for avoiding two different functions with the same symbol. In regular C, using static on a global variable also makes it have no external linkage. It also moves where the memory is actually set aside. Which changes when it gets initialized, and how large your executables are. I believe it's also a very good way to ensure your variables are safe to use in signal handler context. This is common go look at the Linux kernel. Happens all the time. Highly useful application of static. 
In regular C, using static on a variable in function scope can be useful if it is also a constant. In that case, you can move the space off the stack and into the BSS section (at least under UNIX, I forget the equivalent under a Win32 platform). This again is used all the time in the Linux kernel. It shrinks the stack usage and saves space. As I recall, you want to declare all strings that are constants as: static const char foo[] = "foobar"; It saves space in the kernel. I forget exactly why it does right off hand, but it has to do with the assembly that GCC outputs. I might have that wrong, you might want to do "*foo" instead of "foo[]", but you get the idea. Now, in regular C, using static on a variable in function scope to store state between function calls is an excellent way to introduce a race condition. So your blanket statement that "static is evil" is blatantly wrong. Whoever or wherever you learned it from was mimicking what they'd heard before without any understanding. Next you'll be telling me there's no good use of "goto's" either. Which isn't the case, they are few and far between, but they do exist. I've never come across one, but I am aware of when they would be useful. I could make use of them, but generally performance, cache coherency, and "the fast path" aren't things I generally have to worry about. Those are the types of problems we just throw more hardware at and keep the code highly maintainable. I learned about a lot of things from my old boss, but the "iii" was one of the truly unique things I've never seen anywhere else. Any good text on C programming will explain what I just did. The small useful things that you just deal with all the time because you never stop to think of a better solution. Paul didn't have too many of those. He recognized any time something was difficult, and pondered the problem until it was easy. Just like he knew when static was a good idea.
Although I actually picked that up at school, during the explanation of Ada. You can do something like named modules in Ada, which you can somewhat duplicate in C with static and separate translation units. The truly most useful thing Paul ever taught me was to just assume all of your code leaks memory. Assume your code will segfault because there is a bug in it. Assume you'll get crappy data that will lead to pathological cases. Design your code so that, as much as possible, that doesn't make any difference in terms of your ability to process the data that doesn't cause a memory leak, a segfault, or isn't malformed. Never get stuck when there is more data to try. If some data elements blow up your code, timestamp when you tried them and don't retry them for a while so you can work on other data. Any time you are doing batch processing, you should always have a parent that is deathly simple that will spawn children that do the real work. Re:Just quick and easy (Score:2) I was poking fun at your tidbit because these days IDEs will find variables for you without text searches. Re:Just quick and easy (Score:2) Which reminds me of my favourite sites to visit when my internet access is through monitored firewalls (corporate or university systems) [lanl.gov] [cornell.edu] [adelaide.edu.au] [uni-augsburg.de] Yep, folks - Governments and Universities worldwide, hosting xxx sites. And for what it's worth, xxx.lanl.gov was the original! Re:Just quick and easy (Score:2) Re:Just quick and easy (Score:2) It's much easier than just scanning for them by eye, and more accurate (computers are good at looking for a series of text, my eyes aren't nearly as fast). Plus I won't have the Re:Just quick and easy (Score:2) Well, aren't you clever not bothering to clarify so I could possibly understand what you mean. I've never said anything about naming anything besides an "increment" variable used in a for loop. You start discussing giving variables "meaningful names". In sections you can't refactor.
there is no spoon (Score:1) Also, this has been touched on a lot. For a script that size, chances are what we see is the product of code optimization and/or a front end program that translates human readable code into smaller "optimized" code for the site. Re:Just quick and easy (Score:1) The objection may be valid for an interpreted language; however, for this, you can use a pre-processor that will substitute a terse compact name for a long explicit one. I did that some 15 years ago when I had to maintain 15-20 year old Business-Basic code which did not allow more than 1 letter + 1 digit variable names. I wrote a pre-compiler that allowed lo
http://tech.slashdot.org/story/04/12/12/025207/sophistication-in-web-applications
When writing in C, we need to follow a set of rules in order for the code to run properly. These rules are known as syntax. As we go through each lesson we will learn new syntax on the topics being covered. Let's look at the Hello World code to examine common syntax that will exist in most (if not all) of your programs.

#include <stdio.h>

int main() {
  // output a line
  printf("Hello World!\n");
}

Case Sensitivity

Most of the words in the code use all lowercase letters. This is known as case-sensitivity. Whether lowercase or uppercase, certain words in the code must follow the correct case in order for the code to run. The only lines of text that can change case are the comment and the text between quotes.

The Semicolon

All statements, like the printf() statement, need to end with a semicolon. This identifies the end of the statement and is needed for the code to run correctly.

Double Quotes

The text in between the double quotes is known as a string (think of a string of characters). All strings must be surrounded by double quotes.

So what happens when we break the rules? The answer is errors. The text below is the error that is output when we leave off the semicolon from the printf() statement in our Hello World code.

script.c: In function ‘main’:
script.c:6:1: error: expected ‘;’ before ‘}’ token
 }
 ^

The text above gives the following information:

- The component location, In function ‘main’
- The line and column number, 6:1
- A description, expected ‘;’ before ‘}’

As we can see, the message does its best to help us solve the errors in our code.

Instructions

Uh oh! Someone broke the Hello World code in script.c. Run the code to view the error in the console. Given the error in the console, can you fix the code in script.c? There's still an error in the console. Given this new error, can you fix the code? The code should run and output the text as expected. When writing code, you will always come across errors.
Good programmers do not write perfect code the first time, or the second time (or the third time…). Good programmers are able to understand an error, identify what's causing it, and correct it within the code. Run the code one more time and move on to the next exercise.
https://www.codecademy.com/courses/learn-c/lessons/c-hello-world-lesson/exercises/errors
- 11 Jun 2022 16:32:27 UTC
- Distribution: Specio
- Module version: 0.48
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- WHAT IS A TYPE?
- BUILTIN TYPES
- PARAMETERIZABLE TYPES
- REGISTRIES AND IMPORTING
- CREATING A TYPE LIBRARY
- DECLARING TYPES
- USING SPECIO WITH Moose
- USING SPECIO WITH Moo
- USING SPECIO WITH OTHER THINGS
- Moose, MooseX::Types, and Specio
- OPTIONAL PREREQS
- WHY THE NAME?
- LONG-TERM PLANS
- SUPPORT
- SOURCE
- DONATIONS
- AUTHOR
- CONTRIBUTORS

NAME

Specio - Type constraints and coercions for Perl

VERSION

version 0.48

SYNOPSIS

DESCRIPTION

The Specio distribution provides classes for representing type constraints and coercions.

WHAT IS A TYPE?

BUILTIN TYPES

The Item type accepts anything and everything. The Bool type only accepts undef, 0, or 1. The Undef type only accepts undef. The Defined type accepts anything except undef. The Num and Int types are stricter about numbers than Perl is. Specifically, they do not allow any sort of space in the number, nor do they accept "Nan", "Inf", or "Infinity". The ClassName type constraint checks that the name is valid and that the class is loaded. The FileHandle type accepts either a glob, a scalar filehandle, or anything that isa IO::Handle. All types accept overloaded objects that support the required operation. See below for details. Overloading setting for the overloaded object. In other words, an object that overloads stringification will not pass the Bool type check unless it also overloads boolification. Most types do not check that the overloaded method actually returns something that matches the constraint. This may change in the future. The Bool type accepts an object that implements bool overloading. The Str type accepts an object that implements string (q{""}) overloading. The Num type accepts an object that implements numeric ('0+') overloading. The Int type does as well, but it will check that the overloading returns an actual integer.
The ClassName type will accept an object with string overloading that returns a class name. To make this all more confusing, the Value type will never accept an object, even though some of its subtypes will. The various reference types all accept objects which provide the appropriate overloading. The FileHandle type accepts an object which overloads globification as long as the returned glob is an open filehandle.

PARAMETERIZABLE TYPES

Any type followed by a type parameter of `a` in the hierarchy above can be parameterized. The parameter is itself a type, so you can say you want an "ArrayRef of Int", or even an "ArrayRef of HashRef of ScalarRef of ClassName". When they are parameterized, the ScalarRef and ArrayRef types check that the value(s) they refer to match the type parameter. For the HashRef type, the parameter applies to the values (keys are never checked).

Maybe

The Maybe type is a special parameterized type. It allows for either undef or a value. All by itself, it is meaningless, since it is equivalent to "Maybe of Item", which is equivalent to Item. When parameterized, it accepts either an undef or the type of its parameter. This is useful for optional attributes or parameters. However, you're probably better off making your code simply not pass the parameter at all. This usually makes for a simpler API.

REGISTRIES AND IMPORTING

A type you define, such as Int, will not be seen by any other package, unless that package explicitly imports your Int type. When you import types, you import every type defined in the package you import from. However, you can overwrite an imported type with your own type definition. You cannot define the same type twice internally.

CREATING A TYPE LIBRARY

DECLARING TYPES

Use the Specio::Declare module to declare types. It exports a set of helpers for declaring types. See that module's documentation for more details on these helpers.

USING SPECIO WITH Moose

This should just work. Use a Specio type anywhere you'd specify a type.
USING SPECIO WITH Moo

Using Specio with Moo is easy. You can pass Specio constraint objects as isa parameters.

USING SPECIO WITH OTHER THINGS

See Specio::Constraint::Simple for the API that all constraint objects share.

Moose, MooseX::Types, and Specio

Type names are strings, but they're not global.

No type auto-creation

Types are always retrieved using the t() subroutine. If you pass an unknown name to this subroutine it dies. This is different from Moose and MooseX::Types, which assume that unknown names are class names.

Anon types are explicit

With Moose and MooseX::Types, you use the same subroutine, subtype(), to declare both named and anonymous types. With Specio, you use declare() for named types and anon() for anonymous types.

Class and object types are separate

Moose and MooseX::Types have class_type.

Overloading support is baked in

Perl's overloading is quite broken, but ignoring it makes Moose's type system frustrating to use in many cases.

Types can either have a constraint or inline generator, not both.

Coercions can be inlined

I simply never got around to implementing this in Moose.

No crazy coercion features

Moose has some bizarre (and mostly undocumented) features relating to coercions and parameterizable types. This is a misfeature.

OPTIONAL PREREQS

There are several optional prereqs that, if installed, will make this distribution better in some way. Installing this will speed up a number of type checks for built-in types. If this is installed it will be loaded instead of the B module if you have Perl 5.10 or greater. This module is much more memory efficient than loading all of B. If one of these is installed then stack traces that end up in Specio code will have much better subroutine names for any frames.

WHY THE NAME?

This distro was originally called "Type", but that's an awfully generic top-level namespace. Specio is Latin for "look at", and "spec" is the root for the word "species".
It's short, relatively easy to type, and not used by any other distro.

LONG-TERM PLANS

Eventually I'd like to see this distro replace Moose's internal type system, which would also make MooseX::Types obsolete.

SUPPORT

Bugs may be submitted at.

SOURCE

The source code repository for Specio

CONTRIBUTORS

Chris White <chrisw@leehayes.com>
cpansprout <cpansprout@gmail.com>
Graham Knop <haarg@haarg.org>
Karen Etheridge <ether@cpan.org>
Vitaly Lipatov <lav@altlinux.ru>

This software is Copyright (c) 2012 - 2022 by Dave Rolsky. This is free software, licensed under: The Artistic License 2.0 (GPL Compatible). The full text of the license can be found in the LICENSE file included with this distribution.

Module Install Instructions

To install Specio, copy and paste the appropriate command into your terminal.

cpanm Specio

perl -MCPAN -e shell
install Specio

For more information on module installation, please visit the detailed CPAN module installation guide.
https://web-stage.metacpan.org/pod/Specio
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <assert.h>
#include <sys/mman.h>
#include "uwac-priv.h"
#include "uwac-utils.h"
#include "uwac-os.h"

- creates a window using a SHM surface
- destroys the corresponding UwacWindow
- sets a rectangle as dirty for the next frame of a window
- retrieves a pointer on the current window content to draw a frame
- returns the geometry of the given UwacWindow buffer
- returns the geometry of the given UwacWindow
- sets or unsets fullscreen; after this call the application should be prepared to receive a configure event. The output is used only when going fullscreen; it is optional and not used when exiting fullscreen
- sets the region of the window that can trigger input events
- sets the region that should be considered opaque to the compositor
- when possible (depending on the shell) sets the title of the UwacWindow
- sends a frame to the compositor with the content of the drawing buffer
http://pub.freerdp.com/api/uwac-window_8c.html
DBIx::Class::Migration::FAQ - Answers to Frequently Asked Questions

fb4e2174a80 "DBI::db" ARRAY = 0x0 KEYS = 0 FILL = 0 MAX = 7 RITER = -1

There's some debugging code somewhere in the DBIx::Class::DeploymentHandler dependency chain. We've looked and can't find it :( A case of beer at the next YAPC we meet at to whoever can figure it out. Although if you see this, there's nothing wrong. The command will work as in the documentation. The only issue is that if there's a lot of debugging scrolling by, you might need to page up in your terminal to catch any real error messages.

UPDATE: I received an email from a contributor who has code-dived a bit and possibly narrowed down the issue. I thought to report that stuff here. When SQLT::Parser::DBIx::Class attempts to serialize schemas that are connected, it tries to serialize the connect object information. This doesn't play so nice with the way the YAML serializer works, since it does try hard to serialize objects.

UPDATE: This may be fixed in recent updates to the DBIC ecosystem. If you are seeing this, try upgrading DBIC and SQLT and see if it goes away.

UPDATE: You really should not be seeing this anymore; if you are, and you're on the latest DBIC and related, please let me know.

Contributing to the project is easy. First you'd fork the project over at Github (), clone the repo down to your work environment and install project dependencies:

cpanm Dist::Zilla
dzil authordeps | cpanm
dzil listdeps | cpanm

You should first have a Modern Perl setup, including local::lib. If you need help installing Perl and getting started, please take a look at the Learning Perl website: I realize Dist::Zilla seems to invoke feelings of nearly religious vigor (both for and against).
After considering the options I felt using it, but carefully constraining the use of plugins to the default set, was a better option than what I've done on other projects, which is to have a custom wrapper on top of Module::Install (you can peek at that if you want over here:). I've decided I'd rather not continue to spend my limited time dealing with dependency and installation management tools, when there's a rational solution that many people already embrace available. The only other option is to continue to build and maintain code for this purpose that nobody else uses, and possibly nobody else understands. If a better option with equal or greater maturity and community acceptance emerges, I'll entertain changing. If the requirement of typing cpanm Dist::Zilla and using the dzil command line tool for a limited number of build jobs prevents you from contributing, I think you are being unreasonably stubborn. If you do contribute to the project, please be aware that I'm not likely to accept patches that include significant changes to the way I'm using Dist::Zilla, including using plugins to weave pod, automagically guess dependencies, and generate tests. I'd prefer to stick to the most simple and standard Perl practices for building and installing code for the present.

Having the database sandboxes automatically created in the share directory is a nice feature, but it can clutter your repository history. You really don't need that in source control, since installing and controlling your database is something each developer who checks out the project should do. If you are using git you can modify your .gitignore. If your sandbox is share/myapp-schema.db or (if you are using the mysql or postgresql sandboxes) share/myapp-schema/ you can add these lines to your .gitignore:

share/musicbase-schema/*
share/musicbase-schema.db

Other source control systems have similar approaches (recipes welcome for sharing).
dbic-migration -Ilib status \
  --dsn="DBI:mysql:database=test;host=127.0.0.1;port=5142" \
  --user msandbox \
  --password msandbox

would be the general approach.

It's been reported that the developer database sandboxing feature doesn't work properly when using AppArmor. I guess AppArmor considers this a security violation. Currently I don't have a reported workaround other than to just disable AppArmor, which for developer-level machines may or may not be a problem. Here are some symptoms of this problem, if you think you may be having it. In /var/log/syslog, something like:

kernel: [18593519.090601] type=1503 audit(1281847667.413:22): operation="inode_permission" requested_mask="::r" denied_mask="::r" fsuid=0 name="/etc/mysql0/my.cnf" pid=4884 profile="/usr/sbin/mysqld"

If you need AppArmor, you'll have to set up and install MySQL the 'old school way.'

You didn't specify a custom --target_dir but forgot to make the /share directory in your project application root. By default we expect to find a /share directory in your primary project root directory (the one that contains your Makefile.PL or dist.ini, and the lib and t directories, for example) where we will create migrations. At this time we can't automatically create this /share directory in the same way we can create all the migration files and directories for you. You need to create that directory yourself:

mkdir share

Patches to fix this, or suggestions, welcomed.

Not everyone loves using an ORM. Personally I've found DBIx::Class to be the only ORM that gets enough out of my way that I prefer it over plain SQL, and I highly recommend you give it a go. However if you don't want to, or cannot convince your fellow programmers (yet :) ), here's one way to still use this migrations and fixtures system. Strictly speaking, we are still using DBIx::Class behind the scenes, just you don't have to know about it or use it.
You use a subclass of DBIx::Class::Schema::Loader in a namespace for your application, like:

package MyApp::Schema;

use strict;
use warnings;
use base 'DBIx::Class::Schema::Loader';

our $VERSION = 1;

__PACKAGE__->naming('current');
__PACKAGE__->use_namespaces(1);
__PACKAGE__->loader_options( );

1;

You'd put that in lib/MyApp/Schema.pm along with your other application code, then just use MyApp::Schema as is discussed in the documentation. This will dynamically build a schema for you, as long as you set the schema arguments to connect to your actual database. Then every time someone changes the database you just bump $VERSION and take it from there. Obviously this is a bit more manual effort, but at least you get the ability to populate to any given version, manage fixtures, do some sane testing, etc. Maybe you'll even be able to convince people to try out DBIx::Class! By the way, if you want, you can always dump the generated version of your classes using make_schema (see "make_schema" in DBIx::Class::Migration and "make_schema" in DBIx::Class::Migration::Script).

That's because MySQL does not support transactional DDL. Even if you have a transaction, MySQL will silently COMMIT when it bumps into some DDL.

You don't always have the luxury of building a new database from the start. For example, you may have an existing database that you want to start to create migrations for. In this case you probably want to dump some data directly from that existing database in order to create fixtures for testing or for seeding a new database. DBIx::Class::Migration will let you run the dump_all_sets and dump_named_sets commands against an unversioned database.
For example:

dbic-migration -Ilib -SMyApp::Schema dump_all_sets \
  --dsn="dbi:mysql:database=myapp;host=10.0.88.98;port=3306" \
  --username johnn \
  --password $PASSWORD

In this case let's say "dbi:mysql:database=myapp;host=10.0.88.98;port=3306" is a database that you've been managing manually and it has some data that is useful for creating your fixture sets. When you run these commands against an unversioned database you will be warned, because we have no way of being sure what version of the fixture sets you should be dumping. We will just assume that whatever the Schema version is, is correct. This can of course lead to bad or undumpable fixtures should your Schema and the unversioned DB not match properly. Buyer beware!

Sorry this error is vague; I am working on a fix. You will get this if you have failed to provide a schema_class, either by setting it with the -S or --schema_class command line option flag:

dbic-migration -Ilib -SMyApp::Schema
dbic-migration -Ilib --schema_class MyApp::Schema

or by exporting the %ENV variable:

export DBIC_MIGRATION_SCHEMA_CLASS=MyApp::Schema

Or, if you have a custom version of DBIx::Class::Migration::Script as discussed in the tutorial, you are not providing a good schema_class value.

You probably forgot to include your project lib directory in the Perl search path. The easiest way to fix this is to use the -I or --lib command line option flag:

dbic-migration -Ilib -SMyApp::Schema [command]

For now, the solution is as presented. Since I prefer not to change my system settings permanently, I just add the following to a little bash script in my application:

sudo sysctl -w kern.sysv.shmall=65536
sudo sysctl -w kern.sysv.shmmax=16777216

I don't know enough about Postgresql to know if the above settings are good, but they work for my testing. Corrections very welcome! Ideally I'd try to find a way to offer a patch to Test::postgresql, although this seems to be limited to people using Macs.
Here are the release steps I currently use, should I eventually find willing co-maintainers:

You need to increment the version in dist.ini and in DBIx::Class::Migration. Update the Changes file. It would be ideal to have been adding to this as you've gone along, but you should double check. If there have been new contributors, be sure to give credit.

pod2markdown < lib/DBIx/Class/Migration.pm > README.mkdn

This will make sure the README.mkdn file in the project root matches the most recent updates.

AUTHOR_MODE=1 prove -lvr t
git add @stuff_to_add
git commit -m $message
git push
dzil build
dzil release
dzil clean

I usually don't update and push the tag until we are on CPAN.

git tag $VERSION -m 'cpan release'
git push --tags

See DBIx::Class::Migration for author information. See DBIx::Class::Migration for copyright and license information.
http://search.cpan.org/~jjnapiork/DBIx-Class-Migration-0.041/lib/DBIx/Class/Migration/FAQ.pod
I have this string:

"Test abc test test abc test test test abc test test abc"

Doing:

str = str.replace('abc', '');

seems to only remove the first occurrence of abc in the string above. How can I replace all occurrences of it?

The general pattern is str.split(search).join(replacement). This used to be faster in some cases than using replaceAll and a regular expression, but that doesn't seem to be the case anymore in modern browsers. Conclusion: if you have a performance-critical use case (e.g. processing hundreds of strings), use the Regexp method. But for most typical use cases, this is well worth not having to worry about special characters.

You can use the code below:

lis = ...

class Solution:
    def firstAlphabet(self, s):
        self.s = s
        k = ''
        k = k + s[0]
        for i in range(len(s)):
            if ..

Using scan should do the trick:

string.scan(/regex/)

There is no simple built-in string function ...
https://www.edureka.co/community/95548/how-to-replace-all-occurrences-of-a-string
Overview

This article introduces the Zato scheduler, a feature of Zato that lets one configure API services and background jobs to be run at selected intervals and according to specified execution plans, without any programming needed. Future articles will introduce all the details, whereas this one gently describes major features such as the overall architecture, one-time jobs, interval-based jobs, cron-style jobs, and the public API to manage jobs from custom tools.

Architecture

In Zato, scheduler jobs are one of the channels that a service can be invoked from. That is, the same service can be mounted on a REST, SOAP, AMQP, ZeroMQ, or any other channel in addition to being triggered from the scheduler. No changes in code are required to achieve it. For instance, let's consider the code below of a hypothetical service that downloads billings from an FTP resource and sends them off to a REST endpoint:

from zato.server.service import Service

class BillingHandler(Service):
    def handle(self):

        # Download data ..
        ftp = self.outgoing.('Billings')
        contents = ('/data/current.csv')

        # .. and send it to its recipient.
        self.outgoing.plain_http['ERP'].post(self.cid, contents)

Nowhere in the service is any reference embedded as to how it will be invoked, and this is the crucial part of Zato's design. Services only focus on doing their jobs, not on how to make themselves available from one channel or another. Thus, even if a service to bulk-transfer billings will initially likely be invoked from the scheduler only, there is nothing preventing it from being triggered by a REST call or from the command line as needed. Or perhaps a message sent to an AMQP queue should trigger it. That is fine as well, and the service will not need to be changed to accommodate it.

Working With Scheduler Jobs

There are three types of scheduler jobs:

- One-time jobs: Great if a service should be invoked at a specific time, but it does not need to be repeated further.
- Interval-based jobs: Let one specify how often to invoke a given service (i.e., once in four weeks, twice an hour, five times a minute), as well as when to stop it so as to form complex plans such as "after two weeks from now, invoke this service twice an hour but do it twelve times only." - Cron-style jobs: Work similar to interval-based ones but use syntax of Cron so 00 3-6 * * 1-5will mean "run the service every hour on the hour from 3 a.m. to 6 a.m., but only Monday to Friday (i.e., excluding weekends)." Note that job definitions always survive cluster restarts. This means that if you fully shut down a whole cluster of Zato servers then all jobs will continue to execute once the server is back. However, any jobs missed during the downtime will not be re-scheduled. When a job is being triggered, its target service can receive extra data that may be possibly needed for that service to perform its tasks. This data is completely opaque to Zato and can be in any format, JSON, XML, YAML, plain text, anything. If a job should not be scheduled anymore (be it because it was a one-time job or because it reached its execution limit) it becomes inactive rather than being deleted. Such an inactive job still is available in web-admin and can be made active again, possibly with a different schedule plan. On the other hand, actually deleting a job deletes it permanently. 
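As a plain-Python illustration of how a five-field cron expression such as `00 3-6 * * 1-5` above decomposes (this helper is hypothetical, not part of Zato's API):

```python
# Hypothetical helper: name the five fields of a standard cron expression.
def cron_fields(expr):
    names = ["minute", "hour", "day_of_month", "month", "day_of_week"]
    return dict(zip(names, expr.split()))

fields = cron_fields("00 3-6 * * 1-5")
assert fields["minute"] == "00"          # on the hour
assert fields["hour"] == "3-6"           # 3 a.m. to 6 a.m.
assert fields["day_of_week"] == "1-5"    # Monday to Friday, weekends excluded
```

Reading left to right, the fields are minute, hour, day of month, month, and day of week, with `*` meaning "any value" in that position.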
Full public API is available to manage jobs, either through REST or SOAP calls, as well as from other services directly in Python, such as below:

from zato.common import SCHEDULER

class JobManager(Service):
    def handle(self):

        # Create a sample job that will trigger one of the built-in test services
        self.invoke('zato.scheduler.job.create', {
            'cluster_id': self.server.cluster_id,
            'is_active': True,
            'name': 'My Sample',
            'service': 'zato.helpers.input-logger',
            'job_type': SCHEDULER.JOB_TYPE.INTERVAL_BASED,
            'seconds': 2,
        })

Stay Tuned for More

This was just the first installment that introduced core concepts behind the Zato scheduler. Coming up are details of how to work with each kind of job, their APIs, and how to efficiently manage their definitions in source code repositories.
https://dzone.com/articles/triggering-api-services-with-zato-scheduler-part-1
I have two Go repos. One called foo - which is a package, and one called bar - which pulls in foo as a dependency. Because of Go's archaic (a more appropriate description is probably against community guidelines) package management system, you have to specify the full domain name of a package that you pull in - so in my file `bitbucket.org/ae-ou/bar/cmd/main.go`, I have:

import "bitbucket.org/ae-ou/foo"

I initially had an issue where I was getting this output:

go get bitbucket.org/ae-ou/foo: reading: 403 Forbidden server response: Access denied. You must have write or admin access.

After looking up the output, several sites told me to run the following to force a connection via SSH:

git config --global url."git@bitbucket.org:".insteadOf ""

When I run `go get` to pull in the foo dependency, I now get a slightly different error:

go: downloading bitbucket.org/ae-ou/foo v0.1.0
go get bitbucket.org/ae-ou/foo: bitbucket.org/ae-ou/foo@v0.1.0: verifying module: bitbucket.org/ae-ou/foo@v0.1.0: reading: 410 Gone server response: not found: bitbucket.org/ae-ou/foo@v0.1.0: reading: 403 Forbidden

It does seem to be finding and accessing my foo repo, because the latest tag in it is 0.1.0, but the go get command is still failing due to a 403. I know for certain that I don't lack permissions on my SSH key, because I use that SSH key to push to/pull from Bitbucket all the time.

Resolved. I had to restart the SSH agent in order for the .gitconfig change to take effect. I was then faced with the following issue:

main.go:5:2: bitbucket.org/ae-ou/foo@v0.1.0: verifying module: bitbucket.org/ae-ou/foo@v0.1.0: reading: 410 Gone

This is something that I'd addressed before - for those facing the same issue, it's just a matter of exporting a variable to prevent Go from looking up the repo in sum.golang.org - as follows:

export GOPRIVATE="bitbucket.org/ae-ou"
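Putting the two fixes from this thread together, here is a sketch of the full setup. The `ae-ou` workspace name comes from the question (substitute your own), and `HOME` is sandboxed here only so the example does not touch your real `~/.gitconfig`:

```shell
# Sandbox so the demo leaves your real global config alone:
export HOME="$(mktemp -d)"

# 1. Rewrite HTTPS clone URLs to SSH so git authenticates with your key:
git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/"

# 2. Skip the public module proxy and checksum database for the private
#    path -- this is what cures the "410 Gone" from sum.golang.org:
export GOPRIVATE="bitbucket.org/ae-ou"

# Verify the rewrite rule took effect:
git config --global --get-regexp 'insteadof'
```

After this, restart (or re-add your key to) the SSH agent as the poster found, then `go get bitbucket.org/ae-ou/foo` should clone over SSH without consulting sum.golang.org.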
https://community.atlassian.com/t5/Bitbucket-questions/How-do-I-run-go-get-on-a-private-repo/qaq-p/1385040
CC-MAIN-2021-43
refinedweb
375
57.87
A small thing with some C++ I'm doing in DirectX. It's not particularly important, it is just forcing me to put in a few extra comparisons to make sure it doesn't become confounding. This little script makes an object move smoothly up and down over the top of something else (an arrow over a selectable object). It has been modified so you can see the Y values and speeds through cout. The problem lies in the output: my acceleration modifier works in increments of 0.01, so the result should never have more than two decimal places. However, consistently on the 55th iteration of my code it gives me the result 0.00999978. This is given as the result of subtracting 0.11 from 0.12. Science! Anyway, check the code. Weird as. Why is it doing it? Am I doing something wrong?

#include <iostream>
using namespace std;

int main()
{
    cout << "Bounce" << endl;
    float yLoc = 1.0f;
    float yAcc = 0.01f;
    float yLocMax = 2.0f;
    float ySpeedMax = 0.1f;
    float ySpeed = 0.0f;
    bool up = true;
    int i = 0; // loop counter

    while (i < 56)
    {
        // move up
        // If we are moving up and not at the top yet
        if (yLoc <= yLocMax && up)
        {
            if (ySpeed <= ySpeedMax)
            {
                ySpeed += yAcc;
            }
        }
        // If we need to turn around from up
        if (yLoc >= yLocMax && ySpeed >= 0.0f && up)
        {
            up = false;
        }
        // move down
        if (yLoc > -yLocMax && !up)
        {
            if (ySpeed >= -ySpeedMax)
            {
                ySpeed -= yAcc;
            }
        }
        // Turn around from down
        if (yLoc <= -yLocMax && !up)
        {
            up = true;
        }
        yLoc += ySpeed;
        cout << "yLoc: " << yLoc << " \tySpeed: " << ySpeed << "\tUp: " << up << endl;
        i += 1;
    }
    return 0;
}
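Nothing in the loop logic is wrong; this is ordinary binary floating-point rounding. Neither 0.01 nor 0.1 has an exact base-2 representation, so every `float` literal is already rounded when stored, and each `+=` rounds again, so after dozens of iterations the accumulated error becomes visible in the printed digits. A minimal sketch of the effect and the usual tolerance-based fix (helper names here are my own, not from the post):

```cpp
#include <cassert>
#include <cmath>

// Repeat the kind of accumulation the bounce loop does: add 0.01f `steps` times.
float accumulate_hundredths(int steps) {
    float v = 0.0f;
    for (int i = 0; i < steps; ++i) {
        v += 0.01f;  // each addition rounds to the nearest representable float
    }
    return v;
}

// The usual fix: never compare floats for exact equality, allow a tolerance.
bool approx_equal(float a, float b, float eps = 1.0e-4f) {
    return std::fabs(a - b) < eps;
}
```

Even a single subtraction shows it: the operands of `0.12f - 0.11f` are close enough that the subtraction itself is exact, yet the result still differs from `0.01f` in the last bits because all three literals were rounded before any arithmetic happened. An alternative fix for this bounce code is to keep speed and position in integer hundredths and only convert to float for rendering.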
http://www.rohitab.com/discuss/topic/40303-cc-adding-floats-and-getting-unexpected-results/
CC-MAIN-2020-40
refinedweb
260
84.88
Arch Window

Description

An Arch Window is a base object for all kinds of "embeddable" objects, such as windows and doors. It is designed to be either independent, or "hosted" inside another component such as an Arch Wall, Arch Structure, or Arch Roof. It has its own geometry, that can be made of several solid components (commonly a frame and inner panels), and also defines a volume to be subtracted from the host objects, in order to create an opening.

Window objects are based on closed 2D objects, such as Draft Rectangles or Sketches, that are used to define their inner components. The base 2D object must therefore contain several closed wires, that can be combined to form filled panels (one wire) or frames (several wires).

The Window tool features several presets; this allows the user to create common types of windows and doors with certain editable parameters, without needing to create the base 2D objects and components manually.

All information applicable to an Arch Window also applies to an Arch Door, as it is the same underlying object. The main difference between a Window and a Door is that the Door has an internal panel that is shown opaque (the door itself), while the Window has a panel that is partially transparent (the glass).

Window constructed on top of a Draft Rectangle, then inserted into an Arch Wall. Using the Arch Add operation automatically cuts a correct opening in the host wall.

Complex window being constructed on top of a Sketch. When entering the window's edit mode you can create different components, set their thickness, and select and assign wires from the sketch to them.

How to use

Using a preset

- Optionally, select an Arch object. If no object is selected, the window will be inserted in the object under the mouse when placing the window.
- Press the Arch Window button, or press the W then I keys.
- Select one of the presets in the list.
- Fill out the desired parameters.
- Press the OK button.
Note: if you install the "Parts Library" from the AddonManager, the Window tool will search this library for additional presets. These presets are FreeCAD files containing a single window based on a parametric sketch that has named constraints. You may place additional presets in the parts_library directory so that they are found by the Window tool:

$ROOT_DIR/Mod/parts_library/Architectural\ Parts/Doors/Custom/
$ROOT_DIR/Mod/parts_library/Architectural\ Parts/Windows/Custom/

The $ROOT_DIR is the user's directory where FreeCAD configuration, macros, and external workbenches are stored.

- On Linux it is usually /home/username/.FreeCAD/
- On Windows it is usually C:\Users\username\Application Data\FreeCAD\
- On Mac OSX it is usually /Users/username/Library/Preferences/FreeCAD/

Creating from scratch

- Optionally, select a face on the Arch object where you want the window to be included.
- Switch to the Sketcher Workbench.
- Create a new sketch.
- Draw one or more closed wires.
- Close the sketch.
- Switch back to the Arch Workbench.
- Press the Arch Window button, or press the W then I keys.
- Enter Edit mode by double-clicking the window in the tree view, to adjust the window components.

Presets

The following presets are available:

Building components

Windows can include 3 types of components: panels, frames and louvres. Panels and louvres are made from one closed wire, which gets extruded, while frames are made from 2 or more closed wires, where each one is extruded, then the smaller ones are subtracted from the biggest one. You can access, create, modify and delete the components of a window in edit mode (double-click the window in the Tree view). The components have the following properties:

- Name: A name for the component
- Type: The type of component. Can be "Frame", "Glass panel", "Solid panel" or "Louvres"
- Wires: A comma-separated list of wires the component is based on
- Thickness: The extrusion thickness of the component
- Z Offset: The distance between the component and its base 2D wire(s)
- Hinge: This allows you to select an edge from the base 2D object, then set that edge as a hinge for this component and the next ones in the list
- Opening mode: If you defined a hinge in this component or any other earlier in the list, setting the opening mode will allow the window to appear open or to display 2D opening symbols in plan or elevation.

Options

- Windows share the common properties and behaviours of all Arch Components.
- If the Auto-include checkbox on the Window creation task panel is unchecked, the window won't be inserted into any host object on creation.
- Add a selected window to a wall by selecting both, then pressing the Arch Add button.
- Remove a selected window from a wall by selecting the window, then pressing the Arch Remove button.
- When using presets, it is often convenient to turn the "Near" Draft Snap on, so you can snap your window to an existing face.
- The hole created by a window in its host object is determined by two properties: DATAHole Depth and DATAHole Wire (introduced in version 0.17). The Hole Wire number can be picked in the 3D view from the window's task panel, available when double-clicking the window in the tree view.
- Windows can make use of Multi-Materials. The window will search the attached Multi-Material for material layers with the same name as each of its window components, and use them if any are found. For example, a component named "OuterFrame" will search the attached Multi-Material for a material layer named "OuterFrame". If such a material layer is found, its material will be attributed to the OuterFrame component. The thickness value of the material layer is disregarded.
Openings

See also: Tutorial for open windows

Doors and windows can appear partially or fully open in the 3D model, or can display opening symbols both in plan and/or elevation. Consequently, these will also appear in extracted 2D views generated by Draft Shape2DView, the TechDraw Workbench or the Drawing Workbench. To obtain this, at least one of the window components must have a hinge and an opening mode defined (see Building components above). Then, using the DATAOpening, DATASymbol Plan or DATASymbol Elevation properties, you can configure the appearance of the window:

A door showing the symbol plan, symbol elevation and opening properties at work

Properties

- DATAHeight: The height of this window
- DATAWidth: The width of this window
- DATAHole Depth: The depth of the hole created by this window in its host object
- DATAHole Wire: The number of the wire from the base object that is used to create a hole in the host object of this window. This value can be set graphically when double-clicking the window in the tree view. Setting a value of 0 will make the window automatically pick its biggest wire for the hole.
- DATAWindow Parts: A list of strings (5 strings per component, setting the component options above)
- DATALouvre Width: If any of the components is set to "Louvres", this property defines the size of the louvre elements
- DATALouvre Spacing: If any of the components is set to "Louvres", this property defines the spacing between the louvre elements
- DATAOpening: All components that have their opening mode set, and provided a hinge is defined in them or in an earlier component in the list, will appear open by a percentage defined by this value
- DATASymbol Plan: Shows 2D opening symbol in plan
- DATASymbol Elevation: Shows 2D opening symbol in elevation

Scripting

See also: Arch API and FreeCAD Scripting Basics.
The Window tool can be used in macros and from the Python console by using the following function:

Window = makeWindow(baseobj=None, width=None, height=None, parts=None, name="Window")

- Creates a Window object based on baseobj, which should be a well-formed, closed Draft Wire or Sketcher Sketch.
- If available, sets the width, height, and name (label) of the Window.
- If the baseobj is not a closed shape, the tool may not create a proper solid figure.

Example:

import FreeCAD, Draft, Arch
Rect1 = Draft.makeRectangle(length=900, height=3000)
Window = Arch.makeWindow(Rect1)
FreeCAD.ActiveDocument.recompute()

You can also create a Window from a preset:

Window = makeWindowPreset(windowtype, width, height, h1, h2, h3, w1, w2, o1, o2, placement=None)

- Creates a Window object based on windowtype, which should be one of the names defined in Arch.WindowPresets.
- Some of these presets are: "Fixed", "Open 1-pane", "Open 2-pane", "Sash 2-pane", "Sliding 2-pane", "Simple door", "Glass door", "Sliding 4-pane".
- width and height define the total size of the object, with units in millimeters.
- The parameters h1, h2, h3 (vertical offsets), w1, w2 (widths), o1, and o2 (horizontal offsets) specify different distances in millimeters, and depend on the type of preset being created.
- If a placement is given, it is used.

Example:

import FreeCAD, Arch
base = FreeCAD.Vector(2000, 0, 0)
Axis = FreeCAD.Vector(1, 0, 0)
place = FreeCAD.Placement(base, FreeCAD.Rotation(Axis, 90))
Door = Arch.makeWindowPreset("Simple door", width=900, height=2000,
                             h1=100, h2=100, h3=100, w1=200, w2=100,
                             o1=0, o2=100, placement=place)
https://www.freecadweb.org/wiki/index.php?title=Arch_Window
CC-MAIN-2019-30
refinedweb
1,516
50.36
Migrating from v2 to v3

Have you run into something that's not covered here? Add your changes to GitHub!

Introduction

This is a reference for upgrading your site from Gatsby v2 to Gatsby v3. Since the last major release was in September 2018, Gatsby v3 includes a couple of breaking changes. If you're curious what's new, head over to the v3.0 release notes.

If you want to start a new Gatsby v3 site, run npm init gatsby or yarn create gatsby in your terminal.

Table of Contents

- Updating Your Dependencies
- Handling Breaking Changes
- Future Breaking Changes
- For Plugin Maintainers
- Known Issues

Updating Your Dependencies

First, you need to update your dependencies. Many plugins won't need updating, so they may keep working (if not, please check their repository for the current status). You can run an npm script to see all outdated dependencies:

npm

Compare the "Wanted" and "Latest" versions and update their versions accordingly. For example, if you have this outdated version:

Install the new package with:

yarn

You'll be given an overview of packages to select in order to upgrade them to latest.

dependencies for plugins that are not yet updated

Gatsby has an amazing ecosystem of plugins that make it easier to get up and running, and to incorporate various data sources and functionality into your Gatsby project. Part of that huge ecosystem includes dependency trees! Depending on how the plugin authors have declared dependencies (e.g. marking a package as a dependency instead of a peerDependency) within those plugins, there could be a myriad of failures that arise. If you encounter any of these issues when migrating your project to Gatsby Version 3, we recommend that you use Yarn resolutions within your package.json.

Please note: If you rely on a plugin that is not found within the list of plugins within the Gatsby framework, you very well may need to use the following resolutions in the near term.
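For illustration, a resolutions block in package.json has the shape below. The package names and version ranges shown are placeholders chosen for this sketch; use the specific resolutions recommended for your failing plugins.

```json
{
  "resolutions": {
    "graphql": "^15.4.0",
    "graphql-compose": "^7.25.0",
    "webpack": "^5.24.2"
  }
}
```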
The specific resolutions we recommend at this time are found below:

Handling version mismatches

When upgrading an already existing project (one that has an existing node_modules folder and package-lock.json file) you might run into version mismatches for your packages, as npm/yarn don't resolve to the latest/correct version. An example would be a version mismatch of webpack@4 and webpack@5 that can throw an error like this:

An effective way to get around this issue is deleting node_modules and package-lock.json and then running npm install again. Alternatively, you can use npm dedupe.

Handling Breaking Changes

This section explains breaking changes that were made for Gatsby v3. Most, if not all, of those changes had a deprecation message in v2. In order to successfully update, you'll need to resolve these changes.

Minimal Node.js version 12.13.0

We are dropping support for Node 10 as it is approaching its maintenance EOL date (2021-04-30). The new required version of Node is 12.13.0. See the main changes in the Node 12 release notes. Check Node's releases document for version statuses.

webpack upgraded from version 4 to version 5

We tried our best to mitigate as much of the breaking change as we could. Some of it is sadly inevitable. We suggest looking at the official webpack 5 blog post to get a comprehensive list of what changed. If you hit any problems along the way, make sure the Gatsby plugin or webpack plugin in question supports version 5.

ESLint upgraded from version 6 to version 7

If you're using Gatsby's default ESLint rules (no custom eslintrc file), you shouldn't notice any issues. If you do have a custom ESLint config, make sure to read the ESLint 6 to 7 migration guide.

src/api is a reserved directory now

With the release of Gatsby 3.7 we introduced Functions. With this, any JavaScript or TypeScript files inside src/api/* are mapped to function routes, like files in src/pages/* become pages.
This also means that if you already have an existing src/api folder, you'll need to rename it to something else, as it's a reserved directory now.

Gatsby's Link component

The APIs push, replace & navigateTo in gatsby-link (an internal package) were deprecated in v2 and are now, with v3, completely removed. Please use navigate instead.

Removal of __experimentalThemes

The deprecated __experimentalThemes key inside gatsby-config.js was removed. You'll need to define your Gatsby themes inside the plugins array instead.

Removal of pathContext

The deprecated API pathContext was removed. You need to rename instances of it to pageContext. For example, if you passed information inside your gatsby-node.js and accessed it in your page:

Removal of boundActionCreators

The deprecated API boundActionCreators was removed. Please rename its instances to actions to keep the same behavior. For example, in your gatsby-node.js file:

Removal of deleteNodes

The deprecated API deleteNodes was removed. Please iterate over the nodes instead and call deleteNode:

Removal of fieldName & fieldValue from createNodeField

The arguments fieldName and fieldValue were removed from the createNodeField API. Please use name and value instead.

Removal of hasNodeChanged from the public API surface

This API is no longer necessary, as there is an internal check for whether or not a node has changed.

Removal of sizes & resolutions for image queries

The sizes and resolutions queries were deprecated in v2 in favor of fluid and fixed. While fluid, fixed, and gatsby-image will continue to work in v3, we highly recommend migrating to the new gatsby-plugin-image. Read the Migrating from gatsby-image to gatsby-plugin-image guide to learn more about its benefits and how to use it.

Calling touchNode with the node id

Calling touchNode with a string (the node id) was deprecated in Gatsby v2. Pass the full node to touchNode now.
Calling deleteNode with the node id

Calling deleteNode with a string (the node id) was deprecated in Gatsby v2. Pass the full node to deleteNode now.

Removal of three gatsby-browser APIs

A couple of gatsby-browser APIs were removed. In the list below you can find the old APIs and their replacements:

- getResourcesForPathnameSync => loadPageSync
- getResourcesForPathname => loadPage
- replaceComponentRenderer => wrapPageElement

Using a global graphql tag for queries

Until now you were able to use the graphql tag for queries without explicitly importing it from Gatsby. You now have to import it:

import { graphql } from 'gatsby'

CSS Modules are imported as ES Modules

The web moves forward and so do we. ES Modules allow us to better treeshake and generate smaller files. From now on you'll need to import CSS Modules as:

import { box } from './mystyles.module.css'

You can also import all styles using the import * as styles syntax, e.g. import * as styles from './mystyles.module.css'. However, this won't allow webpack to treeshake your styles, so we discourage you from using this syntax.

Migrating all your CSS could be painful, or you may be relying on third-party packages that require you to use CommonJS. You can work around this issue for Sass, Less, Stylus & regular CSS modules using the respective plugins. If you're using regular CSS modules, please install gatsby-plugin-postcss to override the defaults. This example covers Sass. The other plugins share the same cssLoaderOptions property.

File assets (fonts, pdfs, ...) are imported as ES Modules

Assets are handled as ES Modules. Make sure to switch your require functions into imports. If you're using require with an expression or require.context (which is not recommended), you'll have to append .default to your require statement to make it work.

webpack 5 node configuration changed (node.fs, node.path, ...)

Some components need you to patch/disable node APIs in the browser, like path or fs. webpack removed these automatic polyfills.
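One way to restore these polyfills manually in a Gatsby site is sketched below, using the onCreateWebpackConfig hook. The package names (path-browserify, process) are assumptions for this sketch and must be installed first; fs is simply stubbed out, because no browser equivalent exists.

```javascript
// gatsby-node.js (sketch)
const webpack = require("webpack")

exports.onCreateWebpackConfig = ({ actions }) => {
  actions.setWebpackConfig({
    resolve: {
      fallback: {
        fs: false, // no browser polyfill; code must not rely on it at runtime
        path: require.resolve("path-browserify"),
      },
    },
    plugins: [
      // webpack 5 no longer injects a `process` global into browser bundles
      new webpack.ProvidePlugin({ process: "process/browser" }),
    ],
  })
}
```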
You now have to manually set them in your configuration:

If it's still not resolved, the error message should guide you on what else you need to add to your webpack config.

process is not defined

A common error is "process is not defined". webpack 4 polyfilled process automatically in the browser, but with v5 that's no longer the case. If you're using process.browser in your components, you should switch to a "window is not undefined" check. If you're using any other process properties, you'll want to polyfill process:

- Install the process library: npm install process
- Configure webpack to use the process polyfill.

GraphQL: character escape sequences in regex filter

In v2, backslashes in regex filters of GraphQL queries had to be escaped twice, so /\w+/ needed to be written as "/\\\\w+/". In v3, you only need to escape once:

GraphQL: __typename field is no longer added automatically

In v2, the __typename field used to be added implicitly when querying for a field of abstract type (interface or union). In v3, __typename has to be added explicitly in your query:

Schema Customization: Add an explicit childOf extension to types with disabled inference

Imagine you have a node type Foo that has several child nodes of type Bar (so you expect the field Foo.childBar to exist). In Gatsby v2 this field was added automatically even if inference was disabled for type Foo. In Gatsby v3 you must declare the parent-child relationship explicitly for this case:

To make upgrading easier, check the CLI output of your site on the latest v2 and follow the suggestions when you see a warning like this:

If you don't see any warnings, you are safe to upgrade to v3. If this warning is displayed for a type defined by some plugin, open an issue in the plugin repo with a suggestion to upgrade (and a link to this guide). You can still fix those warnings temporarily in your site's gatsby-node.js file until it is fixed in the plugin.
Related docs:

Schema Customization: Extensions must be set explicitly

Starting with v3, whenever you define a field of a complex type, you must also assign the corresponding extension (or a custom resolver):

In Gatsby v2, we add those extensions for you automatically but display a deprecation warning. To make upgrading easier, when you see a warning like the one below, check the CLI output of your site on the latest v2 and follow the suggestions provided. If this warning is displayed for a type defined by some plugin, open an issue in the plugin repo with a suggestion to upgrade (and a link to this guide). You can still fix those warnings temporarily in your site's gatsby-node.js until it is fixed in the plugin. If you don't see any warnings, you are safe to upgrade to v3.

Read more about custom extensions in this blog post.

Schema Customization: Removed noDefaultResolvers argument from inference directives

Find noDefaultResolvers entries and remove them. See also the deprecation announcement for noDefaultResolvers.

Schema Customization: Remove many argument from childOf directive

The many argument is no longer needed for the childOf directive in Gatsby v3:

Schema Customization: Consistent return for nodeModel.runQuery

In Gatsby v2, nodeModel.runQuery with firstOnly: false returns null when nothing is found. In v3 it returns an empty array instead. To upgrade, find all occurrences of runQuery (with firstOnly: false or not set) and make sure the checks for emptiness are correct:

Note: When using the argument firstOnly: true, the returned value is an object or null, so do not confuse those two cases.

Future Breaking Changes

This section explains deprecations that were made for Gatsby v3. These old behaviors will be removed in v4, at which point they will no longer work. For now, you can still use the old behaviors in v3, but we recommend updating to the new signatures to make future updates easier.

touchNode

For Gatsby v2, the touchNode API accepted nodeId as a named argument.
This has now been deprecated in favor of passing the full node to the function. In case you only have an ID at hand (e.g. getting it from the cache or as __NODE), you can use the getNode() API:

deleteNode

For Gatsby v2, the deleteNode API accepted node as a named argument. This has now been deprecated in favor of passing the full node to the function.

@nodeInterface

For Gatsby v2, @nodeInterface was the recommended way to implement queryable interfaces. It is now deprecated in favor of interface inheritance:

JSON imports: follow the JSON modules web spec

JSON modules are coming to the web. JSON modules only allow you to import the default export and no sub-properties. If you do import properties, you'll get a warning along these lines:

webpack deprecation messages

When running community Gatsby plugins, you might see [DEP_WEBPACK] messages pop up during the "Building JavaScript" or the "Building SSR bundle" phase. These often mean that the plugin is not compatible with webpack 5 yet. Contact the Gatsby plugin author or the webpack plugin author to flag this issue. Most of the time Gatsby will build fine; however, there are cases where it won't, and the reasons why can be cryptic.

Using fs in SSR

Gatsby v3 introduces incremental builds for HTML generation. For this feature to work correctly, Gatsby needs to track all inputs used to generate an HTML file. Arbitrary code execution in gatsby-ssr.js files allows usage of the fs module, which is marked as unsafe and results in this feature being disabled. To migrate, you can use import instead of fs:

For Plugin Maintainers

In most cases, you won't have to do anything to be v3 compatible. But one thing you can do to be certain your plugin won't throw any warnings or errors is to set the proper peer dependencies. gatsby should be included under peerDependencies of your plugin and it should specify the proper versions of support.
If your plugin supports both versions:

Known Issues

This section is a work in progress and will be expanded when necessary. It's a list of known issues you might run into while upgrading Gatsby to v3, and how to solve them.

reach-router

We vendored reach-router to make it work with React 17. We added a webpack alias so that you can continue using it as usual. However, you might run into an error like this after upgrading:

To resolve the error above, make sure that you have updated all dependencies. It's also possible that you have an outdated .cache folder around; run gatsby clean to remove the outdated cache. In some situations the webpack alias will be ignored, so you will need to add your own alias. The most common example is in Jest tests. For these, you should add the following to your Jest config:

Configuring using a jest.config.js file:

Configuring using package.json:

webpack EACCES

You might see errors like these when using Windows or WSL:

Gatsby will continue to work. Please track the upstream issue to see how and when this will be fixed.

yarn workspaces

Workspaces and their hoisting of dependencies can cause you trouble if you want to update a package incrementally. For example, if you use gatsby-plugin-emotion in multiple packages but only update its version in one, you might end up with multiple versions inside your project. Run yarn why package-name (in this example, yarn why gatsby-plugin-emotion) to check if different versions are installed. We recommend updating all dependencies at once and re-checking with yarn why package-name. You should only see one version found now.
https://www.gatsbyjs.com/docs/reference/release-notes/migrating-from-v2-to-v3
CC-MAIN-2022-40
refinedweb
2,566
63.8
SikuliXIDE; Mojave; Robot Framework: TypeError: unsupported operand type(s) for -: 'unicode' and 'int'

I seem to be getting hung up on something that I think ought to be easy, but I've been going round and round with this for a while. I have a function that worked fine in Jython (outside the Robot Framework), but when I put it in the Robot Framework I had to change things up a bit by converting a string back to an integer in order to get the "if, elif, else" to work. I'm attempting to use this function for moving around a table grid so I can write values into the cells. The problem is, the grid movement actually appears to work (because I can see it highlight the next cell I intend to write to), but it throws the following error and fails my keyword. The error is:

TypeError: unsupported operand type(s) for -: 'unicode' and 'int'

FOLLOWING IS ROBOT FRAMEWORK DRIVING IT ALL:

runScript("""
robot
*** Variables ***
${TESTAPP}      "/Applications/
${moveright}    1
${moveleft}     2
${moveup}       3
${movedown}     4

*** Settings ***
Library           ./inline/GA4
Suite Setup       Suite Setup Actions
Suite Teardown    Suite Teardown Actions
Test Teardown     Test Case Tear Down

*** Test Cases ***
Test Manual Copy and Paste to Cell
    [Documentation]    Test Case to do a manual copy and paste to cell (Table view)
    Log    Executing Manual Copy and Paste to Cell Test
    press manual entry button
    press view options button
    select table view
    initialize table    ${TESTAPP}
    write to selected cell    2
    grid movement    ${movedown}    1

*** Keywords ***
Suite Setup Actions
    Log    Suite Setup Actions done below
    start my application    ${TESTAPP}
    define application region

Suite Teardown Actions
    Log    Suite Teardown Actions done below
    stop my application    ${TESTAPP}

Test Case Tear Down
    Log    Test Teardown Actions done below
    prep for next test
""")

FOLLOWING IS THE FUNCTION (I've stripped everything out of the class but that one method, to shorten it up here):

class GA4(object):

    def grid_movement(self, direction, loopvalue):
        print "Executing 'grid_movement' function"
        movdirection = int(direction)
        while (loopvalue > 0):
            if movdirection == 1:
                pass  # (key-press call lost when the post was archived)
            elif movdirection == 2:
                pass  # (ditto)
            elif movdirection == 3:
                pass  # (ditto)
            elif movdirection == 4:
                pass  # (ditto)
            else:
                break
            loopvalue -= 1
            print loopvalue

FOLLOWING IS THE ROBOT FRAMEWORK OUTPUT:

Test Execution Log
00:00:13.367 SUITE GA4-experimenting
    Full Name: GA4-experimenting
    Source: /Users/
    Start / End / Elapsed: 20190418 12:56:33.650 / 20190418 12:56:47.017 / 00:00:13.367
    Status: 1 critical test, 0 passed, 1 failed; 1 test total, 0 passed, 1 failed
00:00:02.021 SETUP Suite Setup Actions
00:00:01.
00:00:09.550 TEST Test Manual Copy and Paste to Cell
    Full Name: GA4-experimenti
    Documentation: Test Case to do a manual copy and paste to cell (Table view)
    Start / End / Elapsed: 20190418 12:56:35.869 / 20190418 12:56:45.419 / 00:00:09.550
    Status: FAIL (critical)
    Message: TypeError: unsupported operand type(s) for -: 'unicode' and 'int'
00:00:00.002 KEYWORD BuiltIn . Log Executing Manual Copy and Paste to Cell Test
00:00:02.949 KEYWORD GA4 . Press Manual Entry Button
00:00:00.770 KEYWORD GA4 . Press View Options Button
00:00:02.239 KEYWORD GA4 . Select Table View
00:00:03.342 KEYWORD GA4 . Initialize Table ${TESTAPP}
00:00:00.174 KEYWORD GA4 . Write To Selected Cell 2
00:00:00.062 KEYWORD GA4 . Grid Movement ${movedown}, 1
    Start / End / Elapsed: 20190418 12:56:45.350 / 20190418 12:56:45.412 / 00:00:00.062
    12:56:45.404 INFO Executing 'grid_movement' function [log] TYPE "#DOWN."
    12:56:45.412 FAIL TypeError: unsupported operand type(s) for -: 'unicode' and 'int'
00:00:00.

Question information
- Language: English
- Status: Solved
- For: Sikuli
- Assignee: No assignee
- Solved by: Melvin Raymond
- Solved: 2019-04-22
- Last query: 2019-04-22
- Last reply: 2019-04-20

Ugh! OK, that was an overlooked miss on my part. I didn't even think about the other variable getting converted, since I had not used a variable for it in Robot Framework.
I just typed the loopvalue I wanted into the tabbed Robot Framework. I didn't even try that, since the error looked as if it were coming from the type and Key statement. Apparently the Robot Framework even converts to strings the values that aren't assigned a Variable in the Variables part of Robot Framework. I just added "loopvalue = int(loopvalue)" in there and that fixed it. I haven't searched for it, but there could be something built into Robot Framework that defines these kinds of tabbed values. Thanks Railman. Good to come back Monday and find I can keep it rolling along.

Did I type in "RailMan"? Or did that auto correct do that? Sorry RaiMan, RailMan = RaiMan

LOL - really no problem - I know it's me ;-)

Apparently loopvalue is a string when handed over to GA4::grid_movement, so this should help:

def grid_movement(self, direction, cells = 1):
    print "Executing 'grid_movement' function"
    movdirection = int(direction)
    loopvalue = int(cells)
    while (loopvalue > 0):
        ...

Usage with one-cell movement:

grid movement    ${movedown}
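RaiMan's point generalizes: Robot Framework hands every argument over as a (unicode) string, so both the direction and the loop count must be converted before any arithmetic. Below is a small, testable sketch of the corrected logic in Python 3 syntax (the original runs under Jython 2); the SikuliX key presses are replaced by plain coordinate updates, purely an assumption made so the conversion fix can be exercised:

```python
# direction codes match the thread: 1=right, 2=left, 3=up, 4=down
MOVES = {1: (1, 0), 2: (-1, 0), 3: (0, -1), 4: (0, 1)}

def grid_movement(direction, loopvalue, start=(0, 0)):
    # Robot Framework passes arguments as strings, so convert BOTH values
    # before comparing or subtracting; this is the fix from the thread.
    movdirection = int(direction)
    loopvalue = int(loopvalue)
    x, y = start
    while loopvalue > 0:
        dx, dy = MOVES.get(movdirection, (0, 0))
        if (dx, dy) == (0, 0):
            break  # unknown direction: stop, like the original else-branch
        x, y = x + dx, y + dy  # stands in for the SikuliX key press
        loopvalue -= 1
    return (x, y)
```

Calling it with string arguments, e.g. `grid_movement("4", "1")`, now works without the TypeError, whether the strings come from Robot Framework variables or from values typed directly into the tabbed test data.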
https://answers.launchpad.net/sikuli/+question/680320
CC-MAIN-2019-39
refinedweb
826
60.04
# Safe-enough Linux server, a quick security tuning

###### The case:

You fire up a professionally prepared Linux image at a cloud platform provider (Amazon, DO, Google, Azure, etc.) and it will run a kind of production-level service moderately exposed to hacking attacks (non-targeted, non-advanced threats).

**What would be the standard quick security-related tuning to configure** before you install the meat?

release: 2005, Ubuntu + CentOS (supposed to work with Amazon Linux, Fedora, Debian, RHEL as well)

![image](https://habrastorage.org/r/w780q1/webt/uz/4-/rx/uz4-rxgsswcljbazh1nbzao427o.jpeg)

> ###### Disclaimers:
>
> * Read the micro threat model given in "The case" paragraph above before you compare my advice with your requirements. Otherwise please help me improve this page with your comments!
> * It's not at all a guide for Docker containers or for a seriously hardened production server.
> * The author is not a certified Unix admin or a devsecops guru.
> * It's subjective advice!

Assumptions
===========

* You are already experienced in Linux administration.
* As an admin you work on a trusted machine (desktop, laptop or tablet), properly hardened, continuously patched, with security not worn off by years of usage or by being shared.
* There is a handy facility to generate and store passwords available on the above admin client.
* Your SSH keys or other credentials to admin the instance are handled securely on your admin machine. (An issue in itself, how to satisfy this requirement: should your keys be enclosed in password- or token-protected key stores or drives?)
* Your access to the cloud platform provider is secure enough, and your user/account there is handled with security in mind.
* To sum up the above: the security context of your installation (instance cooking) work is on a higher level of assurance than the level you would expect from the instance you configure.
* You use a Linux image provided by a party whose security competence is assumed (like the image baked by the cloud platform provider themselves or the Linux distro builder).
* You login to the instance via a trusted and clean terminal app or within the admin browser via the native web-CLI.

The best approach is to adhere to cyber hygiene practices from moment zero and not rely on the idea that security can be hardened on in a bonus step afterwards.

> Our advice would be to use — a corporate managed — iPad as a trusted admin client at a remote location, or a managed desktop at the enterprise internal network. In case you/the company are/is subject to targeted attacks then laptops are a weaker choice for paranoids. But that's another story to which I will dedicate an article or a post later.

The steps
=========

### Have a matching firewall protection enabled

I mean one which serves as the internet-facing firewall behind which your instance is running. Amazon calls it the security group; in DO that's the firewalls feature of a project. This filters which ports internet connections can hit on your instance.

Plan which external firewall preset will match the open ports of the instance you are installing: SSH + the ports of services. The external firewall may allow more ports, as the preset may serve several types of instances. The good old approach to minimize the open ports is still valid.

You may have different firewall presets ready for different stages of your installations. Like in the beginning port 22 is the one you start off with, but when a non-standard SSH port is configured you may switch to a preset with 22 closed forever, and the corresponding internet noise will not hit your instance.

### Choosing the initial connect/login method

1.
In case you have your working practice to connect to the cloud platform resources, connect as you deem safe. Otherwise:
2. If the cloud platform allows, I would choose the option of connecting over SSH with a random root password generated by the platform (DO offers this). See the explanation of my pro-passwords (long random passwords) point of view in the below section ('Password authentication is wrong, key files rule! (Disagreed)').

### Don't let a fresh instance run as is in the wild

Be paranoid, don't let a fresh instance run unpatched while exposed to the internet. When it's fired up, log in within minutes to start patching.

### Login

… and

```
sudo -s
```

> Unless otherwise indicated, all below commands you issue as **root**.

### Make sure the bare system is up-to-date

```
# Ubuntu
apt update
apt upgrade

# CentOS
dnf update
dnf upgrade

# + both optionally:
shutdown -r now
```

Restarting is technically not necessary, but it won't harm.

### Basic steps

* Enable the local firewall and allow some ports. (It's indeed of little use to enable the firewall if you already enabled a front-facing firewall on the provider's host, but still.)
* Set the timezone.
Ubuntu:

```
ufw enable
ufw allow 22
ufw allow <your custom ssh port, eg 52112>/tcp   # see the SSHd setup below
ufw status

# make your time meaningful, change the location:
timedatectl set-timezone Europe/Berlin
```

CentOS:

```
# firewalld may not be present:
dnf install firewalld
systemctl enable firewalld --now
# then
firewall-cmd --state      # should be 'running'
firewall-cmd --list-all   # should be telling something like:
# public (active)
# services: cockpit dhcpv6-client ssh
firewall-cmd --list-all-zones | more
firewall-cmd --get-default-zone
# public

# add a custom ssh port, see the SSHd setup below
firewall-cmd --permanent --zone=public --add-port=<your custom ssh, eg 52112>/tcp
# success
firewall-cmd --complete-reload
firewall-cmd --list-all   # now should also contain your custom ssh port

# make your time meaningful, change the location:
timedatectl set-timezone Europe/Berlin
```

### SELinux, AppArmor

Check the AppArmor or SELinux status:

```
# Ubuntu:
apparmor_status
# CentOS:
sestatus
```

It's not that you should bother with it, but it's still more secure to utilize a Linux distro preconfigured with an LSM in **enforcing mode**, so try to make it an aspect when you choose your provider and select from the factory-maintained images there. However, based on others' opinions and considering the "micro threat model" set out above, I would not make it a prerequisite. In order for an LSM to make real sense it should be tuned adequately to your services and the particular situation.

### Comfort first

Make yourself comfortable and productive, like:

* zsh, [fish](https://fishshell.com) or a similar advanced shell will help you a lot (tuning it is not discussed here).
* wget is used in this guide.
* Don't hesitate to install the editor of your choice as early as possible (I promote micro here, though it's a bit more complicated to install).

```
apt/dnf install zsh wget
which zsh
echo $0

# Ubuntu
snap install micro --classic
```

> Yes I start with suppressing a security alert.
> Let's be realistic. Let's face it: all systems are insecure. Even bash had horrible security flaws; everything has, maybe except ssh, written by paranoid and methodical freaks.))

Alternatively install micro from: <https://github.com/zyedidia/micro/releases/>

* See my suggestions in the 'Then services, folders, extensions' section below.

```
mkdir -p /tank/packagez
chown -R root:wheel /tank/packagez
mkdir /opt/bin
cd /tank/packagez
wget https://github.com/zyedidia/micro/releases/download/v2<...>/micro-2<...>-linux64.tar.gz
tar xf micro...
mv micro.../micro /opt/bin/
chown -R root:root /opt/bin
chmod -R 755 /opt/bin
export PATH="$PATH:/opt/bin"
micro /etc/profile.d/env.sh
# create or add
export PATH="$PATH:/opt/bin"
# * or add it to the PATH in /etc/environment if that one is in use by the os
```

### Password quality

Mod the password quality settings:

```
# on Ubuntu you may need to install this first
apt install libpam-pwquality

# then
micro /etc/security/pwquality.conf
# enable:
minlen = 20
ocredit = -2
```

This will set the minimum password length to 20 and require 2 special characters in it. Why 20? Ok, let it be 25. See also my comment below regarding passwords vs key files. (I suggest using random passwords and SSH password authentication instead of key files, hence the length. See the explanation of my point of view in the below 'Password authentication is wrong, key files rule! (Disagreed)' section.)
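A rough way to sanity-check these length choices against key files (see the pro-passwords section below): back-of-the-envelope entropy math, assuming a ~66-symbol alphabet. This calculation is my illustration, not from the original article.

```python
import math

# Entropy in bits of a random password: length * log2(alphabet size).
# 66 symbols approximates a-z, A-Z, 0-9 plus a few specials (an assumption).
def password_bits(length, alphabet=66):
    return length * math.log2(alphabet)

for n in (20, 25, 30):
    print(f"{n} chars: ~{password_bits(n):.0f} bits")
```

At roughly 6 bits per character, a 20-char random password is already past 120 bits, and 30 chars lands above the 128-bit symmetric level. That is the basis of the "large random password is about as strong as a key" argument made later in this guide.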
### Your personal user

For sshing into your persistent instance, imo a good practice would look as follows:

```
ssh -p 52112 lola@acme.web
```

* custom user, custom port

Create an admin user:

```
# Ubuntu
useradd --uid <1111> -N --home /home/.<lola> --shell /usr/bin/zsh -g admin -G users <lola>

# CentOS
useradd --uid <1111> -N --home /home/.<lola> --shell /bin/zsh -g wheel -G users <lola>
```

* mind to rewrite 'lola' to your nick
* UID does not matter actually
* zsh may not be your choice
* mind that the home directory is hidden in the above example (yes, I think obscurity adds to security a bit))

Create your password as a random token. Eg.:

```
< /dev/urandom tr -cd '[:alpha:][:digit:]_!$%' | head -c30 ; echo ""
# OR create it on the client device and copy or retype
passwd <lola>
```

Tweaking the choice of special characters, try to use the ones which are available for manual entry on different devices.

### SSHd mods

Consider this: a) the SSHd config is already properly set; b) there are many settings in there which make sense to harden. So you can choose anything from leaving it as is to diving into tuning it to death (given you read a lot about the meaning and the effects).

Consider the following quick mods:

* custom port (52112 is an example)
* disallowing root and any unexpected account to login

> Note: A stupid mistake in the sshd config may lock out your access forever. (Except that you may login via the provider's console.) So: Open **two SSH connections** to the remote instance, one with the original default user and one with your new custom user. SSHd will not lock out a live connection in most situations even if reloaded with a broken configuration (a failed restart will not kick your live sessions in most cases).

Modify the below settings in the sshd config. The settings in `< >` are to be fixed by you.
```
micro /etc/ssh/sshd_config

# mods:
Port <52112>
AllowUsers <lola>
DenyUsers root guest test admin toor ec2-user bitnami <...default accounts>
DenyGroups
AllowGroups
```

* Mind to use the same extra port which you accepted with the local and external firewalls.

For better cipher, MAC and key exchange settings add the following to the end of the config:

```
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
```

You may also mod the below settings in the sshd_config (**keep all the other original settings as they are!**): walk through it and edit (uncomment and modify) the lines mentioned below. (This is not the new content of the config; these are the lines in the original which I suggest you modify accordingly.)

```
LoginGraceTime 45
PermitRootLogin no
MaxAuthTries 6
MaxSessions 3
ClientAliveInterval 60
ClientAliveCountMax 2
ChallengeResponseAuthentication no
GSSAPIAuthentication no
AllowAgentForwarding no
AllowTcpForwarding no
GatewayPorts no
X11Forwarding no
PermitUserEnvironment no
```

If there are duplicate settings, sshd uses the first one it encounters, so check the whole config file. (Note: appending a setting to the end of the file will therefore NOT override an earlier occurrence of the same keyword; edit the original line instead. Appending works for the cipher/MAC/Kex lines above only because they are normally not set, just commented, in the stock config.)

You may also disallow (with a leading hash) the SFTP subsystem unless you know `scp` is something you really need and can't solve otherwise. (Consider that scp allows for dumping any amount of data from your system.)
```
#Subsystem sftp /usr/lib/openssh/sftp-server
```

Restart the service, check the status and review the effective settings:

```
systemctl restart sshd.service
systemctl status sshd.service
# check the listening port
sshd -T

# in case of errors:
# - Ubuntu: tail -30 /var/log/syslog
# - CentOS: tail -30 /var/log/messages
# - both:   journalctl -xe
```

Keeping the live SSH connections/terminals alive, open a new terminal and check whether connecting with your custom admin user works:

```
ssh -p <52112> <lola>@<your host>
```

Restart your instance and ssh into it with your user.

### Password authentication is wrong, key files rule! (Disagreed)

You may wonder why I don't recommend immediately forbidding PasswordAuthentication and allowing PubkeyAuthentication only?!

You may prove the opposite, but I see no **practical** cryptographic strength difference between a **large random** password and a key, beyond a certain level of entropy we care to put into our password. A random password and a key are both just a bunch of random bytes.

From the practical point of view we then have a significant difference in the handling of the two kinds of secrets. A 20-30 char password you can even type in while looking at it stored in your password store, well protected on an iOS device. With keys there is always the drama of moving those around and protecting the key file, unless you have an enterprise-grade key store.

So it's up to you. Obviously you can, if you prefer so:

```
PasswordAuthentication no
```

> If you want, and have, an MFA capability to use a token like a Yubikey — that would be the winning solution to heighten the level of assurance.

### Kill the root user

When you are done with the ssh configuration, and your custom user safely logs in to the instance, it's time to kill the root user. Not that it's absolutely a good idea, but you never know how it was misconfigured. :) 'Killing' is done by assigning a random password.
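The /dev/urandom pipeline used throughout this guide can be wrapped in a tiny helper so the length and character set live in one place. This is an illustrative sketch, not part of the original guide:

```shell
#!/bin/sh
# gen_pw LENGTH: print a random password from a typable character set.
# Same tr/head pipeline as used elsewhere in this guide.
gen_pw() {
  len="${1:-30}"
  < /dev/urandom tr -cd '[:alpha:][:digit:]_!$%' | head -c "$len"
  echo ""
}

gen_pw 30
```

The default of 30 characters matches the user-creation step earlier; pass a different length for the root-password step below.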
Check interactive password holders (Ubuntu only):

```
passwd -Sa | grep P
```

Change the root password to a random one (Ubuntu, CentOS):

```
echo -n "root:$( < /dev/urandom tr -cd '[:lower:][:digit:]_!$%,.[=*=][=#=]' | head -c40 )" | chpasswd
```

Note that root can assign any password (despite the fine policy we created above), so be careful with issuing the command without tuning, or test what it does.

If there were other interactive users, assign random passwords to them unless you understand the reason for their interactive presence on your instance.

Then services, folders, extensions
----------------------------------

Now you are done with installing the base system — bada-bing-bada-boom. Proceed to installing your services. That may involve adding newly available ports to the firewall and new users. But first make the standard folder structure.

### /tank

My subjective practice is to create a `/tank` folder for storing custom stuff, including the installation material:

```
mkdir -p /tank/packagez
chown -R root:admin /tank/packagez
# * under CentOS admin is wheel
cd /tank/packagez
```

So 'packagez' (or anything like that; 'install', you name it) will be the location to download the installation sources, which you can return to later to check what was the source of the running software. Like:

```
wget https://github.com/caddyserver/caddy/releases/download/v2.0.0-rc.1/caddy_2.0.0-rc.1_Linux_x86_64.tar.gz
```

### /opt

I would suggest installing your tools and service binaries to the /opt folder. This ensures that you can keep track of what additional software you deployed on the instance.
Using caddy as an example:

```
cd /tank/packagez
mkdir -p /opt/caddy
tar xf caddy... -C /opt/caddy
chown -R root:root /opt/caddy
chmod -R 755 /opt/caddy
/opt/caddy/caddy --help
```

> caddy is handy to fire up an HTTPS service out of nothing:
>
> `caddy reverse-proxy --from acme.web --to localhost:9000`

### Configurations

As for the location of configuration stuff, I would suggest following the standard guides. Mostly that means /etc/:

```
mkdir /etc/caddy
touch /etc/caddy/Caddyfile
chown -R root:caddy /etc/caddy
mkdir /etc/ssl/caddy
chown -R root:caddy /etc/ssl/caddy
chmod 0770 /etc/ssl/caddy
```

### Create a user for the service

For the service users a good idea is to not assign a working shell to them, like:

```
groupadd --system caddy
useradd --system --gid caddy --create-home --home-dir /var/lib/caddy --shell /bin/false --comment "Caddy web server" caddy
```

* Mind to assign a fake shell to it. Alternatively: `--shell /usr/sbin/nologin`

### systemd

Mostly your services will require automatic start. For that you presumably use systemd. Like:

```
wget https://raw.githubusercontent.com/caddyserver/dist/master/init/caddy.service -P /etc/systemd/system
chmod 644 /etc/systemd/system/caddy.service
micro /etc/systemd/system/caddy.service

# General approach
User=caddy
Group=caddy
...
PrivateTmp=true
PrivateDevices=true
ProtectHome=true
ProtectSystem=full

# Then:
systemctl daemon-reload
systemctl status caddy
journalctl -u caddy -b -f
tail -40 /var/log/caddy.log

# when it works
systemctl enable caddy
```

> by: [@timurxyz](https://www.linkedin.com/in/timurx/)
>
> org: [secdev.eu](https://secdev.eu), [defdev.eu](https://defdev.eu)

References, sources
-------------------

* <https://stribika.github.io/2015/01/04/secure-secure-shell.html>
* <https://en.wikipedia.org/wiki/Snappy_(package_manager)>
* <https://linuxize.com/post/how-to-configure-and-manage-firewall-on-centos-8/>
* Photo by Caroline Attwood on Unsplash
https://habr.com/ru/post/499494/
07 July 2009 06:12 [Source: ICIS news]

By Bohan Loh

(adds more details)

SINGAPORE (ICIS news)--Petrochemical giants BASF and Sinopec will invest $1.4bn to expand their petrochemical complex at Nanjing in eastern China, about 40% more than the original budget for the project. The companies did not cite reasons for raising the estimated project cost.

In a joint statement on Tuesday, BASF and Sinopec said the capacity of the companies' steam cracker at the site would be raised to 740,000 tonnes/year by 2011 from the current 600,000 tonnes/year under the expansion plan. The project also includes the building of 10 new downstream derivative units, they said. (Please see complete details below)

"Engineering work for the expansion is in full swing," the companies said.

The companies' Nanjing petrochemical complex is being operated by BASF-YPC, a 50:50 joint venture (JV) between BASF and Sinopec's flagship subsidiary Yangzi Petrochemical Co.

"The routine cracker turnaround scheduled for 2010 will be used to tie in the expansion modules and integrate the production processes," the joint statement said.

The plan was approved by the Chinese government on 1 July.

"The expansion conforms with

"With this step, BASF-YPC will be among the most competitive sites of Sinopec operations," he added.

Meanwhile, Yangzi-BASF Styrenics Co Ltd (YBS), another JV between BASF and Sinopec, would be merged into BASF-YPC to further increase synergies within the

YBS produces styrene monomer (SM), polystyrene (PS) and expandable polystyrene (EPS).

The BASF/Sinopec $1.4bn expansion plan includes the following:

* Expansion of current 600,000 tonne/year cracker in

* Expanding existing EO plant by 80,000 tonne/year and the construction of a new EO purification unit.
* Construction of a new 80,000 tonne/year butyl glycol ether plant, a new 60,000 tonne/year non-ionic surfactants plant, and a new amines complex for the production of ethanolamines, ethyleneamines and dimethylethanolamine.
* Construction of a new dimethylaminoethyl acrylate (DMA3) plant. * Construction of a new 60,000 tonne/year super-absorbent polymer (SAP) plant. * Expansion of the existing propionic acid and aldehyde plants. * Expansion of the existing 250,000 tonne/year oxo-C4 plant to 300,000 tonne/year. * Development of an integrated C4 complex, including a 100,000-120,000 tonne/year butadiene extraction plant, a 120,000 tonne/year 2-propylheptanol plant, an 80,000 tonne/year isobutene extraction plant, and a 50,000 tonne/year plant for highly reactive polyisobutene. ($1 = €0.72)
http://www.icis.com/Articles/2009/07/07/9230538/basf-sinopec-to-shell-out-40-more-on-nanjing-jv.html
Test 2 from Fall 2016

Note, you may only refer to this site during the test. Do not look at other notes or code or Google, etc. Do not work together. This is not a group test.

There is one problem (wow, how simple!) that is worth 10,000 points (oh no!)

Create a bitbucket repo called csci221-test2. The answers are due by Tue Dec 13, 11:59pm.

Task

Create a novel variant of the hash map, known as a HushMup, that works as described below. Also create some Google Test test cases and a UML diagram. The deliverables are described again below.

Background on hash maps

A hash map stores key/value pairs. It generally works as follows:

- Each key should have a corresponding "hash" number. This is often an integer or long. A hash function must exist to compute the hash for the key's data type. For example, if the key is an integer, the hash function can just return the key itself. If the key is a string, the hash function must compute an integer value representing the string, e.g., by adding up all the ASCII values of the characters in the string. If the key is a float, a hash code may be generated by interpreting the bits in the float as an integer instead. And so on.
- We will also need a way to compare two keys for equality. Usually `.equals()` (Java) or `==` (C++) works just fine for simple key types like integers, strings, etc.; for complex user-defined classes, the user must implement a `.equals()` or `operator==()` function.
- The underlying storage for the hash map is either a linked list containing key/value pairs in each node or two arrays/vectors: one containing keys, one containing values, synchronized so that a key in position `i` in the first array corresponds to the value in position `i` in the second array.
- The `put(key, val)` function works as follows: create the hash for the key; use this hash as an array/vector/list position (if the hash is large, compute `hash % capacity` to get a number that's in the range of [0, capacity) of the array/vector/list).
Now that we have an index, check if that index is empty in the array/vector/list. If so, save the key and value in those positions, and we're done. If the slot is not empty, there is a collision, and you have some choices: move forward one slot, compute a new hash (new index) based on the first hash, etc. Somehow, find a position to save the key and value. Note, you will need to grow the array/vector/list if it's full.
- The `get(key)` operation works as follows: compute the hash, perhaps also compute `hash % capacity` to get an index, and check the array/vector/list at that position. If it's empty, move forward one spot etc. (using the same logic for movement as `put`) until you run out of places to check; if still not found, return null. Otherwise, if there is something at that position, check that the given key equals the key at that index. If they are equal, you found the correct index and can return the value at that index (perhaps from the other array of values). If not equal, move forward and check the next two keys for equality (again, using the same logic for movement as `put`).

HushMup features

- Any kind of key and value may be stored. Thus, a HushMup is a template class with potentially different types for key and val (but all keys are the same type and all vals are the same type).
- HushMups use two arrays for the underlying storage (not vectors or linked lists).
- HushMups have a third array, of booleans, indicating if a slot is occupied or not. We must do this because we cannot simply put nulls in the key array, since there is no universal null value for arbitrary types (unlike Java objects, which are actually pointers).
- The constructor of HushMup must receive a function pointer to the hash function for the given key type. Since the key type can be anything (a template type), the HushMup code will not know which function to call for generating a hash. Thus, it must be told, with a function pointer. Read about function pointers from our course notes.
Note that the function pointer type must refer to your template type.
- HushMups support `get` and `put` with the usual arguments (of course, template type arguments). Unlike Java, C++ does not treat every object as a pointer, so we cannot return null from `get` if the key is not found in the map. Thus, HushMups also have a `has(key)` function that returns true/false. At the top of the `get` method, an assertion is made that the map has the given key. Otherwise, the program crashes. Achieve this by doing the following: `#include <cassert>`, plus at the top of the `get(key)` method, add a line of code: `assert(has(key));`. Note, both `has` and `get` must check the `occupied` array: an old (deleted) key may be found in the keys array, but that key is only valid if the corresponding position in the `occupied` array is set to true.
- HushMups have a `del(key)` method that, presumably, just marks the space as unoccupied. This function should also have `assert(has(key))`.
- Whenever a HushMup needs to grow, the arrays double in size. The data in the arrays are copied into the new larger arrays, keeping their same positions.
- The HushMup's logic for handling collisions is to move forward one position, wrapping around to the other side when the end of the array is reached.
- HushMups have a `size()` function that returns the number of key/value pairs in the map. This is not necessarily the same as the size of the arrays, known as capacity.

Deliverables

- Submit hushmup.h, containing the implementation described above. It's a template class, so all the code belongs in the .h file.
- Write testhm.cpp to test the implementation. Use the Google Test framework. Write tests for the following scenarios:
  - Create a HushMup with whatever types and check the size without adding anything.
  - Create a HushMup to store `int` keys and values. Use an identity hash function (i.e., return the key as its own hash). Add the same key/value pair multiple times.
Check that the size is 1, the `has` function returns true for the key, and the `get` function retrieves the val for the key.
  - Add more key/value pairs (different keys and values) and check `has` and `get` for each. Also check that `size()` returns the right value. Be sure to have a case where enough different key/value pairs are added to cause the arrays to grow.
  - Now delete some values using `del` and check that everything comes out right. In particular, delete a key and then check `has` again for that same key. It should return false.
  - Create a HushMup that stores string keys and values. Write a hash function for strings that adds all the ASCII values of the characters in the string and returns the sum. Check that all the HushMup features work, just like before.
- Be sure that your code works on Londo and does not leak any memory nor does it crash. Be sure you are using the `assert` described previously. Be sure that your code compiles without any warnings.
- Create a UML diagram documenting the HushMup class. You'll only have one box in the diagram, but all the fields and functions should be represented, including their protection status (public, protected, private), field types, argument types, and return types.

Do not install Google Test yourself. The Makefile below uses a pre-installed location. Use this Makefile.
Note, your code files must be named hushmup.h and testhm.cpp:

GTEST_DIR = /opt/gtest-1.7.0
CXX = g++
CXXFLAGS = -ansi -Wall -ggdb3 -isystem $(GTEST_DIR)/include -Wextra -lpthread
GTEST_HEADERS = $(GTEST_DIR)/include/gtest/*.h $(GTEST_DIR)/include/gtest/internal/*.h
GTEST_SRCS = $(GTEST_DIR)/src/*.cc $(GTEST_DIR)/src/*.h $(GTEST_HEADERS)

all: testhm

gtest-all.o: $(GTEST_SRCS)
	$(CXX) $(CXXFLAGS) -I$(GTEST_DIR) -c $(GTEST_DIR)/src/gtest-all.cc

gtest_main.o: $(GTEST_SRCS)
	$(CXX) $(CXXFLAGS) -I$(GTEST_DIR) -c $(GTEST_DIR)/src/gtest_main.cc

gtest.a: gtest-all.o
	$(AR) $(ARFLAGS) $@ $^

gtest_main.a: gtest-all.o gtest_main.o
	$(AR) $(ARFLAGS) $@ $^

testhm: testhm.cpp hushmup.h gtest_main.a
	$(CXX) $(CXXFLAGS) -o testhm testhm.cpp gtest_main.a

.PHONY: clean
clean:
	rm *.a *.o testhm

Questions?

Write me an email with any questions you have. As much as possible, I will submit my response to the whole class (via email) so that everyone benefits from your question.
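To make the function-pointer requirement concrete, here is a minimal sketch of the template shape described above. It is illustrative only, not a solution: put, get, del and array doubling are deliberately left out, and all names besides those required by the spec are made up.

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the HushMup shape: template key/value types, three parallel
// arrays, and a constructor taking a hash-function pointer.
template <typename K, typename V>
class HushMup {
public:
    typedef unsigned long (*HashFn)(const K&);   // hash function pointer type

    HushMup(HashFn fn, std::size_t cap = 8)
        : hash(fn), capacity(cap), count(0),
          keys(new K[cap]), vals(new V[cap]), occupied(new bool[cap]()) {}

    ~HushMup() { delete[] keys; delete[] vals; delete[] occupied; }

    // Linear probe with wraparound; a slot only counts if its
    // 'occupied' flag is set (deleted keys are ignored).
    bool has(const K& key) const {
        std::size_t start = hash(key) % capacity;
        for (std::size_t n = 0; n < capacity; ++n) {
            std::size_t i = (start + n) % capacity;
            if (occupied[i] && keys[i] == key) return true;
        }
        return false;
    }

    std::size_t size() const { return count; }
    // put/get/del (each with assert(has(key)) where required) and the
    // capacity-doubling growth logic are the exercise.

private:
    HashFn hash;
    std::size_t capacity, count;
    K* keys;
    V* vals;
    bool* occupied;
};

// Identity hash for int keys, as the test suggests.
inline unsigned long identityHash(const int& k) {
    return static_cast<unsigned long>(k);
}
```

The key point is the `HashFn` typedef: because it mentions `K`, the compiler checks that the function you pass in matches the key type you instantiated the template with.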
http://csci221.artifice.cc/guide/test-2-fall2016.html
[Share] func to get a dict of items, values from a ui.View that are not in ui.View

All I am sharing is a one-line function, custom_attrs. I have shared this before, but I am pretty sure I was only returning the attrs, not the attrs and values. But ok, it's not much. I have an example below that looks pretty lame, but I wanted it for getting custom attrs from a UIFile/pyui file.

Just std python, but as a beginner I still struggle with different types of comprehensions. Btw, it works. Not to say it couldn't be written better/faster.

import ui

def custom_attrs(v):
    # returns a dict with k,v found in v(iew) that are not in ui.View
    return {key: getattr(v, key) for key in list(set(v.__dict__) - set(vars(ui.View)))}

class MyClass(ui.View):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.a = None
        self.b = 1

if __name__ == '__main__':
    mc = MyClass(bg_color = 'white')
    mc.present('sheet')
    print(custom_attrs(mc))
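The same set-difference trick works outside Pythonista too. A plain-Python illustration (no `ui` module; `Base`/`Child` are made-up stand-ins for `ui.View` and its subclass): only instance attributes whose names are not defined on the base class survive.

```python
def custom_attrs(obj, base):
    # instance attributes whose names are not defined on the base class
    return {k: getattr(obj, k)
            for k in set(obj.__dict__) - set(vars(base))}

class Base:
    x = 0                 # class-level attr, like ui.View's properties

class Child(Base):
    def __init__(self):
        self.x = 1        # shadows a Base name -> filtered out
        self.extra = 42   # truly custom -> kept

print(custom_attrs(Child(), Base))   # {'extra': 42}
```

Note this relies on the base exposing its attribute names at class level (which `ui.View` does via properties); a plain instance attribute set in the base's `__init__` would not be filtered.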
https://forum.omz-software.com/topic/3635/share-func-to-get-a-dict-of-items-values-from-a-ui-view-that-are-not-in-ui-view
Wiring up a web page to a backend database can be tedious. You have to use some type of data access layer (DAL) and then map this to fields on a page. This is repetitive and error-prone code. There are numerous ORM tools that allow you to interface with a database, but this does not help you with loading controls, prompts, and validators on a page. This article will explore using nHydrate as a DAL that provides copious amounts of metadata and thus provides a release from some of the tedium of manually wiring up a web page.

nHydrate

First, a short explanation of nHydrate: this is a platform that allows you to develop software in a model-driven way (or domain-driven design, DDD). You design a model that contains entities and relations. There are generators that read the model and create code based on that model. When you need to make changes to your application design, simply change your model and re-generate. A database installer controls database changes and versioning. You never touch the database. The database is not the model. You have a real model. It really does cut down on the errors we all make when performing repetitive tasks.

The Entity Framework generated DAL provides metadata that can be used to automate many of the tasks that we perform on a web page. We will see how to create prompts and validators, as well as bind data to controls in a compile-time-checked manner. Using the compiler is a big plus. I always try to put as much functionality in code as possible. If you add code in markup via script or other mechanism, there will be issues when you change your data model or perform a host of other activities. Code is king; the compiler checks it!

This sample depends on a little code I have written around a model. A nHydrate model is used to create a DAL that interacts with the database. The generated Entity Framework layer has additional extensions, attributes, and metadata around it to facilitate writing applications quickly.
My "ORMMap" class performs all of the generic functionality I need to make a web page work. This class is not part of the generated framework, as it is very specific to my implementation. How you choose to bind or map a user interface is specific, based on your technology such as web forms, MVC, WinForms, etc. You will need to write a little connection code like I have done. However, the metadata around the DAL makes this easy.

In this sample, I can bind labels, textboxes, check boxes, radio buttons, lists, and dropdowns. I can populate dropdowns, check lists and radio lists as well. There is very little code on my primary forms, and the screens still have load/save functionality with validation.

The connection code I wrote has only a few methods. The first allows you to load a combo box. In the page initialize, I have added the list load code. This has the added benefit that I do not need viewstate. In fact, all of the pages in the sample have viewstate turned off. The way we are binding here does not require viewstate at all.

Let us look at an add/edit page for a "Region" object first. Our model has 3 entities: Customer, Country, and Region. A Customer has [0..1] Countries and [0..1] Regions. This is a very simple model used to demonstrate how you can bind objects with associations.

My project has two base pages from which I derive all pages. The first is BasePage, which does nothing. It is merely a base. The second is BasePersistablePage, which is used for edit pages and has core functionality for loading and saving. It has three main methods: SetupBindings, CreateObject, and SaveData. An edit page overrides these methods to add any custom code needed for the binding.

Below is a class diagram of my main site objects. You can see an edit and list page along with their base types. The "ORMMap" object is expanded to show its methods.
In this class, there is no code that deals with specific objects (like Region or Customer). This class can be used from any page to handle any generated object.

Below are two screen shots: the first of a list of Region objects, the second an edit page of a Customer. I have included the customer edit screen because it is the most complicated and contains dependent objects. The screen is quite simple in that it is just a list of controls that allows for data entry.

Let us start with the Region object because it is very simple. It has an ID and a Name property. This is the classic look-up table. Databases are full of these for states, countries, regions, etc. They have just a primary key and a name. When a user edits one of these objects, he is really only editing the name, as the primary key is never shown or edited.

In the region edit page, I override the "SetupBindings" method. The database context is passed to me, so I use it to look up the actual object I wish to bind. I am pulling the primary key off of the URL string. This sample website uses a URL structure like "/regionedit.aspx?id=2". This makes it easy for users to bookmark a page. In the code, I also change the header to indicate to the user whether this is a create or edit operation.
All generated entities have associated enumerations created that map to their fields. This allows you to write methods that can work on specific fields via enumeration. These are always in sync because they are generated from your nHydrate model, just like the objects themselves. This line of code maps the "Name" field of a Region object to the textbox. This one line of code will two-way bind the EF object to the UI control. There are no magic strings here. I did not put the field designation "Name" in quotes (like a dataset); I used the provided enumeration for the Region object.

There is only one line of code to map the textbox since there is only one textbox. You will need one line of code for each control. There are numerous overloads, one for each control type, of course. You could add more complicated overloads for other .NET controls or even third-party controls you buy. Simply add an overload for a new control type and add whatever logic is necessary to handle the new control type.

```csharp
protected override void SetupBindings(EFExampleEntities context)
{
    int id = this.Request.GetURLInteger("id");
    var item = context.Region.Where(x => x.RegionId == id).FirstOrDefault();
    lblHeader.Text = "Edit Region";
    if (item == null)
    {
        lblHeader.Text = "New Region";
        item = CreateObject(context) as Region;
    }
    this.Mapper.Map(txtName, item, Region.FieldNameConstants.Name);
}
```

There is not much more code on the page than that. There is a "CreateObject" method on the page that is called when the base page needs an object created, but this just creates a new object and adds it to the context by default. If you have more complex business rules to perform when adding this object type, you could add that logic or call it from here. My entire Region edit page is about 50 lines of code. I would argue that this is really small. Keep in mind that any necessary validation that is defined in the model would automatically be added.
The Customer edit page is a bit more complex, of course, but has the same structure. We must load additional controls, so it makes sense to look at its code too. In this code, we see the "InitializeList" method used. This is called to load a dropdown with the list of values from which the user can choose one. We simply pass in the control, the list to bind, and the fields that will be used for text and value. These are all loaded in the OnInit event, so there is no need for viewstate in my example. Later in the code, we bind each control individually to the appropriate property on a specific object. In this sample, all fields bind to the same object; however, this is not necessary. We could just as easily bind some of the UI controls to dependent objects off the primary or any other EF object we wish.

```csharp
protected override void SetupBindings(EFExampleEntities context)
{
    int id = this.Request.GetURLInteger("id");
    var item = context.Customer.Where(x => x.UserId == id).FirstOrDefault();
    lblHeader.Text = "Edit Customer";
    if (item == null)
    {
        lblHeader.Text = "New Customer";
        item = CreateObject(context) as Customer;
    }

    //Load the dropdowns
    this.Mapper.InitializeList(cboCountry,
        context.Country.OrderBy(x => x.Name).ToList(),
        Country.FieldNameConstants.Name.ToString(),
        Country.FieldNameConstants.CountryId.ToString());
    this.Mapper.InitializeList(cboCustomerType,
        context.CustomerType.OrderBy(x => x.Name).ToList(),
        CustomerType.FieldNameConstants.Name.ToString(),
        CustomerType.FieldNameConstants.CustomerTypeId.ToString());
    this.Mapper.InitializeList(cboRegion,
        context.Region.OrderBy(x => x.Name).ToList(),
        Region.FieldNameConstants.Name.ToString(),
        Region.FieldNameConstants.RegionId.ToString());

    //Bind the fields
    this.Mapper.Map(txtFirstName, item, Customer.FieldNameConstants.FirstName, lblFirstName);
    this.Mapper.Map(txtLastName, item, Customer.FieldNameConstants.LastName, lblLastName);
    this.Mapper.Map(txtCity, item, Customer.FieldNameConstants.City, lblCity);
```
```csharp
    this.Mapper.Map(txtAddress, item, Customer.FieldNameConstants.Address, lblAddress);
    this.Mapper.Map(txtCode, item, Customer.FieldNameConstants.Code, lblCode);
    this.Mapper.Map(txtCompany, item, Customer.FieldNameConstants.CompanyName, lblCompany);
    this.Mapper.Map(txtEmail, item, Customer.FieldNameConstants.Email, lblEmail);
    this.Mapper.Map(txtFax, item, Customer.FieldNameConstants.Fax, lblFax);
    this.Mapper.Map(txtPhone, item, Customer.FieldNameConstants.Phone, lblPhone);
    this.Mapper.Map(txtPostalCode, item, Customer.FieldNameConstants.PostalCode, lblPostalCode);
    this.Mapper.Map(cboCountry, item, Customer.FieldNameConstants.CountryId, lblCountry);
    this.Mapper.Map(cboCustomerType, item, Customer.FieldNameConstants.CustomerTypeId, lblCustomerType);
    this.Mapper.Map(cboRegion, item, Customer.FieldNameConstants.RegionId, lblRegion);
}
```

Now let's look at a list page. The region list page shows a paginated list of regions from which you can choose to edit or delete items. One of the big issues with displaying lists is pagination. There is no need to worry about this anymore, as nHydrate removes most of the drudgery for you. The following code pulls the page offset and records per page from the URL string and uses it to pull a list of data from the database. I am using a PagingURL class I have written to parse the URL into the segments I care about. The paging object holds the page offset and records per page and passes this information into the data query method. When this call returns, there is an additional property on the paging object named "TotalRecords" that will contain a count of all records that match the where condition, thus allowing you to build a pager control on screen with current page, total pages, records per page, and total record count.

Once the page of data is returned, I simply bind it directly to a grid.
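The pager bookkeeping that "TotalRecords" enables is plain arithmetic and is worth seeing in isolation. Here is a minimal sketch of that math (written in Python for brevity rather than the article's C#; the function names are mine, not part of nHydrate):

```python
def total_pages(total_records, records_per_page):
    """Number of pages needed to show total_records, via ceiling division."""
    return (total_records + records_per_page - 1) // records_per_page

def page_offset(page_number, records_per_page):
    """Zero-based index of the first record on a 1-based page number."""
    return (page_number - 1) * records_per_page

# A pager for 95 matching records at 10 per page needs 10 pages,
# and page 3 starts at record index 20.
pages = total_pages(95, 10)
offset = page_offset(3, 10)
```

The same two quantities are what a stylized pager control renders: current page, total pages, and the record window being shown.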
I have built a paging control that I use on list pages; it simply takes the returned paging information and builds a stylized pager. This is all that is really needed to build a list/edit data entry website.

```csharp
using (var context = new EFExampleEntities())
{
    var url = new PagingURL(this.Request.Url.AbsoluteUri);
    var paging = new Widgetsphere.EFCore.DataAccess.Paging(url.PageOffset, url.RecordsPerPage);
    grdItem.DataSource = context.Region.GetPagedResults(x => x.Name, paging);
    grdItem.DataBind();
    PagingControl1.Populate(paging);
}
```

Now let's look at how we can pull the metadata out of the generated objects. The "ORMMap" class is really nothing more than a list of controls that we associate with a field on an Entity Framework object. Each nHydrate Entity Framework object has an associated metadata class that can be pulled off via attributes and used to pull model information at runtime. Common metadata would be Display Name, Regular Expression, and Required. The display name is a friendly name you define for a field. So, for example, "FirstName" in the database might be "First Name", or "PostalCode" might be "Zip code". This can be used for prompts and validation controls. To pull the display name from a generated object, you can use the following code. The entity passed in is the base nHydrate EF object. All generated objects are derived from this type, so this routine will work on any of them.

```csharp
private string GetDisplayName(Enum field)
{
    var context = new EFExampleEntities();
    var metadata = context.GetMetaData(context.GetEntityFromField(field));
    var a = metadata.GetType().GetField(field.ToString()).GetCustomAttributes(
        typeof(System.ComponentModel.DataAnnotations.DisplayAttribute), true).FirstOrDefault();
    if (a == null) return field.ToString();
    else return ((System.ComponentModel.DataAnnotations.DisplayAttribute)a).Name;
}
```

If no display name was defined in the model, the database name is used.
You can do the same thing to pull off the validation metadata: the regular expression and the required (non-nullable) attribute.

```csharp
private bool IsNullable(Enum field)
{
    var context = new EFExampleEntities();
    var metadata = context.GetMetaData(context.GetEntityFromField(field));
    var a = metadata.GetType().GetField(field.ToString()).GetCustomAttributes(
        typeof(System.ComponentModel.DataAnnotations.RequiredAttribute), true).FirstOrDefault();
    return (a != null);
}

private string GetRegularExpression(Enum field)
{
    var context = new EFExampleEntities();
    var metadata = context.GetMetaData(context.GetEntityFromField(field));
    var a = metadata.GetType().GetField(field.ToString()).GetCustomAttributes(
        typeof(System.ComponentModel.DataAnnotations.RegularExpressionAttribute), true).FirstOrDefault();
    if (a == null) return string.Empty;
    else return ((System.ComponentModel.DataAnnotations.RegularExpressionAttribute)a).Pattern;
}
```

Notice the pattern is simply to get the metadata class of an object, which can be retrieved via an extension method on all generated objects. The metadata class has all fields for the tracked object on it, with attributes defining data points like required, expression, min-max ranges, etc. You can use the descriptor class to build complex UI structures with non-specific code that works for all of your entities for any model. You could abstract this out to a common assembly that you use for all projects.

Another important and time-saving technique is to emit validators. A common issue with UI development is knowing which fields should have validation on them and keeping up with it as the DBA changes the database or project managers change business rules. It is easy to forget to change some arbitrary screen in the midst of all other development, especially since these are business rules and not errors. Now a simple and elegant solution is at hand: simply use the metadata to emit the validators on the page.
If someone changes the nHydrate model and re-generates, the validators are now model-driven, so they will reconfigure themselves with no code changes. The following routine returns a list of validators that you can add to the controls collection of a form. Simply call this method to get a list to dynamically add to your page.

```csharp
public virtual IEnumerable<System.Web.UI.Control> GetValidators()
{
    var retval = new List<System.Web.UI.Control>();
    foreach (var control in _controlList.Keys)
    {
        var e = _controlList[control];

        //Determine if we need a required field validator
        if (NeedsValidatorRequired(e.Entity, e.Field, e.CanHaveRequiredValidation, e.ForceRequiredValidation))
        {
            var r1 = new RequiredFieldValidator
            {
                ControlToValidate = control.UniqueID,
                Display = ValidatorDisplay.None,
                ErrorMessage = "The '" + GetDisplayName(e.Entity, e.Field) + "' is required!"
            };
            retval.Add(r1);
        }

        //Determine if we need a regular expression validator
        if (NeedsValidatorExpression(e.Entity, e.Field))
        {
            var r1 = new RegularExpressionValidator
            {
                ControlToValidate = control.UniqueID,
                Display = ValidatorDisplay.None,
                ErrorMessage = "The '" + GetDisplayName(e.Entity, e.Field) + "' is not the correct format!",
                ValidationExpression = GetRegularExpression(e.Entity, e.Field)
            };
            retval.Add(r1);
        }
    }
    return retval;
}
```

Notice that it checks whether a field is required and, if so, emits a required validator. It then determines whether the Entity.Field has a validation expression and, if so, emits a regular expression validator. You can make this as specific to your situation as necessary. These were just two very easy ones to add. Also keep in mind that this is all generic code. This does not work on any one specific object like Country or Customer. It works on all objects in any model you can build.

This simple example demonstrates how you can create quite robust data entry screens with very little code. The whole idea is to use metadata to drive your application.
This is much easier with model-driven development. If your model supports extra metadata properties and is not just an ORM mapper, you can handle quite complex scenarios quite generically. This will get you started with model-driven development of a web UI. This technique allows you to keep your markup small. We did not add validators manually, and I did not even mention that the MaxLength property of the textboxes is set as well; this ensures that users cannot enter data that is too big for the field. Information that can be defined in metadata can be emitted as code and markup at runtime, thus reducing the code you write and the associated mistakes.
https://www.codeproject.com/Articles/205099/Binding-Web-Pages-with-nHydrate?fid=1630703&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None
I'm tinkering around with OpenCV and I'm having some trouble figuring out what fps I should be recording webcam footage at. Here is what I have when I record at 15 fps:

```python
import cv2

cap = cv2.VideoCapture(0)

# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc('m', 'p', '4', 'v')
fps = 15.0  # Controls the fps of the video created: todo look up optimal fps for webcam
out = cv2.VideoWriter()
success = out.open('../assets/output.mp4v', fourcc, fps, (1280, 720), True)

while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        frame = cv2.flip(frame, 1)
        # write the flipped frame
        out.write(frame)
        cv2.imshow('frame', frame)
        # If user presses escape key program terminates
        userInput = cv2.waitKey(1)
        if userInput == 27:
            break
    else:
        break

# Release everything if job is finished
cap.release()
out.release()
cv2.destroyAllWindows()
```

Let's say your camera is recording at 25 FPS. If you are capturing 15 FPS while your camera is recording at 25 FPS, the video will be approximately 1.6 times faster than real life. You can find out the frame rate with get(CAP_PROP_FPS) or get(CV_CAP_PROP_FPS), but it's invalid unless the source is a video file. For cameras or webcams you have to calculate (estimate) the FPS programmatically:

```python
import time
import cv2

video = cv2.VideoCapture(0)

num_frames = 240  # Number of frames to capture
print("Capturing {0} frames".format(num_frames))

start = time.time()  # Start time
# Grab a few frames
for i in range(0, num_frames):
    ret, frame = video.read()
end = time.time()  # End time

seconds = end - start  # Time elapsed
print("Time taken : {0} seconds".format(seconds))

# Calculate frames per second
fps = num_frames / seconds
print("Estimated frames per second : {0}".format(fps))
```

So this program estimates the frame rate of your video source by recording the first 240 frames as a sample and then calculating the delta time. Lastly, the result of a simple division gives you the FPS.
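The mismatch the answer describes is just a ratio: if frames are produced at one rate but the file header claims another, perceived playback speed is scaled by their quotient. A small sketch of that arithmetic (plain Python, no camera required; this models the 25-vs-15 figures quoted in the answer above, and the function name is mine):

```python
def playback_speed_factor(camera_fps, writer_fps):
    """Ratio used in the answer above: how many times faster than real
    life the clip appears when the camera delivers camera_fps but the
    VideoWriter was told writer_fps."""
    return camera_fps / writer_fps

# Camera at 25 FPS, VideoWriter told 15 FPS -> roughly 1.67x,
# the "approximately 1.6 times faster" figure from the answer.
factor = playback_speed_factor(25.0, 15.0)
```

Matching the measured FPS from the estimation snippet to the `fps` argument of `VideoWriter` is what keeps that factor at 1.0.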
https://codedump.io/share/YK0eHkjqhYqc/1/what-is-the-optimal-fps-to-record-an-opencv-video-at
18/01/2012 at 09:29, xxxxxxxx wrote:

User Information:
Cinema 4D Version: R12-R13
Platform: Windows ; Mac OSX ;
Language(s): C++ ;

---------

I can't find any reference to Tcaik (except for its own .h/.res/.str files) and definitely no plugin ID to use to make one. I've scanned the entire Cinema 4D R13 install and it didn't find a reference to the plugin ID (not even in the c4d_symbols.h!). And it is not available as a Command, so I'm at a loss (except for a bit of coding to display the tag's plugin ID myself).

On 18/01/2012 at 10:01, xxxxxxxx wrote:

Not sure if this is what you're asking for:

```cpp
#include "..\..\..\..\resource\modules\ca\res\description caik.h"

BaseDocument *doc = GetActiveDocument();
BaseObject *obj = doc->GetActiveObject();
if (!obj) return FALSE;

obj->MakeTag(1019561, NULL);           // Create a new IK tag
BaseTag *newtag = obj->GetFirstTag();  // Find the new tag and give it a new variable
newtag->SetParameter(DescID(ID_CA_IK_TAG_ENABLE), GeData(TRUE), DESCFLAGS_SET_0); // Enable "Use IK" option
```

-ScottA

On 18/01/2012 at 10:09, xxxxxxxx wrote:

Where did you find the plugin ID? It is not anywhere in the install (incl. the resource folder). And it didn't show up in the Command Manager, which shows the plugin ID.

On 18/01/2012 at 10:28, xxxxxxxx wrote:

I created the tag by hand, then dragged it into the bottom of the script console window, where it displayed this: temp(1). So I edited that to this:

println(temp(1)->GetType())

Which gave me the ID #1019561 in the console. I went searching for that number using the Windows search tool, and oddly enough, I found it listed in a file called "interface_icons.txt" of all things. *Seems like a strange place to enumerate ID#s to me personally.

On 18/01/2012 at 11:53, xxxxxxxx wrote:

Thanks for the information. I hate when they do this. It makes our work a bit more difficult, mystical, and an art form instead of being a regimented, clear-cut process. Now I can convert my IK chains to R12+ IK chains.

On 18/01/2012 at 12:09, xxxxxxxx wrote:

NP.
I'm just thrilled I could finally help you out for once, rather than you always helping me out.

On 19/01/2012 at 01:31, xxxxxxxx wrote:

Originally posted by xxxxxxxx: "I went searching for that number using the Windows search tool. And oddly enough, I found it listed in a file called "interface_icons.txt" of all things. *Seems like a strange place to enumerate ID#s to me personally."

"interface_icons.txt" is just a file listing all the interface icon ID#s.
https://plugincafe.maxon.net/topic/6192/6527_making-an-ik-tag-tcaik
EmfIndex Comparison

This page is intended to compare the two index implementations from different viewpoints.

Contents

- 1 Performance
  - 1.1 Indexing
  - 1.2 Query response time
- 2 Convenient query API

Performance

Performance measurement was done on a T61 laptop.

Indexing

Indexing time: This test measured the time needed to index x copies of the content of Ecore.ecore (containing 393 instances of EObject and 520 references).

Index save time (logarithmic scale on x and y axis): This test measured the time needed to dump the index to the file system.

Memory consumption: This test measured the in-memory size of the index by use of the Memory Analyzer Tool. In the SAP case, paging was disabled.

Required Memory on Disc: These values show the amount of kilobytes per Ecore content.

Query response time

Query all Resources, EObjects, and all EReferences.

Query all references targeting a certain resource (logarithmic scale on x and y axis).

Query all instances of "EClass" (logarithmic scale on x and y axis). These values show how fast the queries respond that are required to simulate interconnected descriptors.

Convenient query API

This method gets an iterable of low-level descriptors. The QueryResult only implements the Iterable interface and has no further methods, like the low-level API:

public class ConvenientUser { public void test() { final ConvenientResource); } }); } }
http://wiki.eclipse.org/EmfIndex_Comparison
Creating an Application in Kivy: Part 5

In Part 4, we took a break from developing Kivy interfaces and focused on improving the interface we already had. In this part, we'll get back to interface development by creating a widget to display the buddy list.

Here's the plan: we'll start with a new widget to render the information we're getting from the roster. Then we'll figure out how to query sleekxmpp to get additional information about the user, such as their online status and status message.

If you haven't been following along or want to start with a clean slate, you can clone the state of the example repository as of the end of part 4:

git clone
git checkout end_part_four

Remember to create and activate a virtualenv populated with the appropriate dependencies, as discussed in part 1.

Table Of Contents

If you're just joining us, you might want to jump to earlier steps.

Rendering A Basic ListView

So far, after successful login, a list of usernames is printed on the ConnectionModal popup. That's not quite what we want to do. Rather, we want to dismiss the popup and render the buddy list in the root window instead of the login form. The first thing we'll have to do here is wrap the login form in a new root widget. We should have done this from the start, but to be honest, I didn't think of it. This widget will be a sort of manager for the AccountDetailsForm, BuddyList, and ChatWindow. If we only ever wanted to show one window at a time, we could use Kivy's ScreenManager class to great effect. However, I hope to display widgets side by side if the window is wide enough, so let's try coding it up manually.

First we can add an OrkivRoot class (leaving it empty for now) to __main__.py. Let's add a BuddyList class while we're at it, since we'll need that shortly.
```python
from kivy.uix.boxlayout import BoxLayout  # at top of file

class BuddyList(BoxLayout):
    pass

class OrkivRoot(BoxLayout):
    pass
```

Next, we update the orkiv.kv to make OrkivRoot the new root object (replacing the current AccountDetailsForm: opening line) and make any instance of OrkivRoot contain an AccountDetailsForm:

```
OrkivRoot:

<OrkivRoot>:
    AccountDetailsForm:
```

Now let's add a show_buddy_list method to the OrkivRoot class. This method will simply clear the contents of the widget and construct a new BuddyList object for now:

```python
class OrkivRoot(BoxLayout):
    def show_buddy_list(self):
        self.clear_widgets()
        self.buddy_list = BuddyList()
        self.add_widget(self.buddy_list)
```

And finally, we can remove the temporary code that renders buddy list jabberids on the ConnectionModal label and replace it with a call to show_buddy_list:

```python
def connect_to_jabber(self):
    app = Orkiv.get_running_app()
    try:
        app.connect_to_jabber(self.jabber_id, self.password)
        app.root.show_buddy_list()
        self.dismiss()
```

Note that we also explicitly dismiss the ConnectionModal. The new BuddyList widget, which is currently blank, will be displayed as soon as we have a valid connection. Let's add some styling to that BuddyList class in our orkiv.kv file. We'll add a ListView, which provides a scrollable list of widgets. Of course, we're not actually putting anything into the ListView, so if we run it now, it won't show anything… but it's there!

```
<BuddyList>:
    list_view: list_view
    ListView:
        id: list_view
```

Notice that I gave the ListView an id property and connected it to a list_view property on the root widget. If you remember part 3 at all, you're probably expecting to add an ObjectProperty to the BuddyList class next. You're right!

```python
class BuddyList(BoxLayout):
    list_view = ObjectProperty()

    def __init__(self):
        super(BuddyList, self).__init__()
        self.app = Orkiv.get_running_app()
        self.list_view.adapter.data = sorted(self.app.xmpp.client_roster.keys())
```

Since you guessed that ObjectProperty was coming, I added an initializer as well.
I have to keep you interested, after all! The initializer first sets up the superclass, then sets the list_view's data to the buddy list keys we've been using all along. There are a couple of complexities in this line of code, though. First, note the call to sorted, which accepts a Python list (in this case containing the ids in the buddy list) and returns a copy of the list in alphabetical order. Second, we are setting the list_view.adapter.data property. Underneath the hood, ListView has constructed a SimpleListAdapter for us. This object has a data property that contains a list of strings. As soon as that list is updated, the adapter and ListView work together to update the display.

The reason that the data is stored in an adapter is to separate the data representation from the controller that renders that data. Thus, different types of adapters can be used to represent different types of data. It's possible to construct your own adapter as long as you implement the relevant methods, but this is typically unnecessary because the adapters that come with Kivy are quite versatile.

Try running this code. Upon successful login, the modal dialog disappears and the buddy list pops up in sorted order. Resize the window to be smaller than the list and you can even see that the ListView takes care of scrolling.

The SimpleListAdapter is not sufficient for our needs. We need to be able to display additional information such as the user's status and availability, and we want to lay it out in a pretty way. We'll also eventually want to allow selecting a buddy in the list so we can chat with them. SimpleListAdapter supports none of this. However, Kivy's ListAdapter does.
So let’s edit the orkiv.kv to use a ListAdapter for the BuddyList class: # at top of file: #:import la kivy.adapters.listadapter # at top of file #:import lbl kivy.uix.label <BuddyList>: list_view: list_view ListView: id: list_view adapter: la.ListAdapter(data=[], cls=lbl.Label) First, we import the listadapter module using the name la. This has a similar effect to a from kivy.adapters import listadapter as la line in a standard Python file. This #:import syntax can be used to import any python module that you might need inside the kv language file. The syntax is always #:import <alias> <module_path>. However, you can only import modules, you can’t import specific classes inside modules. This is why I used a rather uninformative variable name ( la). When we construct the ListAdapter at the end of the snippet, the shortened form happens to be more readable, since we are immediately following it by a class name. We also import the label class as lbl. While it seems odd to have to import Label when we have seen Label used in other parts of the KVLanguage file, we have to remember that sometimes KV Language allows us to switch to normal Python. That’s what’s happening here; the code after adapter: is normal python code that doesn’t know about the magic stuff Kivy has imported. So we have to import it explicitly. This is probably not the best way to do this; we’re probably going to have to replace this view with a custom ListView subclass later, but it suits our needs for the time being. That code is simply constructing a new ListAdapter object. It assigns empty data (that will be replaced in BuddyList.__init__) and tells the ListAdapter that the cls to render the data should be a Label. So when we run this code, we see a scrollable list of Labelcode> objects each with their text attribute automatically set to one of the jabber ids we set up in the BuddyList initializer. However, a Label still isn’t the right widget for rendering a buddy list item. 
We need to show more information in there, like whether they are available or away and what their status message is. A label won't cover this, at least not cleanly, so let's start writing a custom widget instead. First, add a BuddyListItem class to __main__.py:

```python
from kivy.properties import StringProperty  # At top of file

class BuddyListItem(BoxLayout):
    text = StringProperty()
```

By default, ListAdapter passes a text property into the widget that is used as its cls. So we hook up such a property, which will be rendered by the label in the KV Language file. This is a StringProperty, which behaves like any other property but is restricted to character content. Now we can use this class in the KV Language file:

```
#:import ok __main__

<BuddyListItem>:
    size_hint_y: None
    height: "100dp"
    Label:
        text: root.text

<BuddyList>:
    list_view: list_view
    ListView:
        id: list_view
        adapter: la.ListAdapter(data=[], cls=ok.BuddyListItem)
```

Don't forget to make the adapter cls point at the new BuddyListItem class. You can also remove the Label import at the top. We set a height property on the BuddyListItem mostly just to demonstrate that the custom class is being used when we run the app. Notice also how the label is referencing the root.text property we set up in the Python file. This property is being set, seemingly by magic, to the value from the data list we set on ListAdapter. In fact, it's not magic; Kivy is just doing a lot of work on our behalf. For each string in that list, it's constructing a new BuddyListItem object and adding it to that scrollable window. It then sets properties on that object using its default arg_converter. The arg_converter is responsible for taking an item from a data list and converting it to a dictionary of properties to be set on the cls object. The default is to simply take a string and set the text property to that string. Not so magic after all. And now we're going to make it less magic.
Instead of a text property, let's make a few properties that better reflect the kind of data we want to display:

```python
class BuddyListItem(BoxLayout):
    jabberid = StringProperty()
    full_name = StringProperty()
    status_message = StringProperty()
    online_status = StringProperty()
```

Don't try running this without updating the KV file, since it's trying to reference a text property that is no longer there:

```
<BuddyListItem>:
    size_hint_y: None
    height: "40dp"
    Label:
        text: root.jabberid
    Label:
        text: root.full_name
    Label:
        text: root.status_message
    Label:
        text: root.online_status
```

Of course, running it in this state will bring us an empty window, since the ListAdapter is still using the default args_converter. Let's add a new method to the BuddyList class. Make sure you're sitting down for this one, it's a bit complicated:

```python
def roster_converter(self, index, jabberid):
    result = {
        "jabberid": jabberid,
        "full_name": self.app.xmpp.client_roster[jabberid]['name']
    }
    presence = sorted(
        self.app.xmpp.client_roster.presence(jabberid).values(),
        key=lambda p: p.get("priority", 100),
        reverse=True)
    if presence:
        result['status_message'] = presence[0].get('status', '')
        show = presence[0].get('show')
        result['online_status'] = show if show else "available"
    else:
        result['status_message'] = ""
        result['online_status'] = "offline"
    return result
```

Deep breaths; we'll explore this code one line at a time. First, the new method is named roster_converter and takes two arguments: the index of the item in the list (which we'll ignore for now) and the jabberid coming in from the data list. These are the normal arguments for any ListAdapter args_converter in Kivy. Next, we start constructing the result dictionary that we'll be returning from this method. Its keys are the properties in the BuddyListItem class. Its values, of course, will be displayed on the window. Then we ask sleekxmpp to give us presence information about the user in question. This is not a trivial piece of code.
I don’t want to spend a lot of time on it, since it involves ugly Jabber and sleekxmpp details, and our focus is on Kivy. In fact, you can skip the next paragraph if you’re not interested. First, sleekxmpp’s client_roster.presence method returns a dictionary mapping connected resources to information about that resource. For simplicity, we’re ignoring resources here, but the basic idea is that you can be connected from two locations, say your cell phone and your laptop, and you’ll have two sets of presence information. Since we don’t care about resources, we ignore the keys in the dictionary and ask for just the values(). However, we still need to pick the “most important” resource as the one that we want to get presence information from. We do this by wrapping the list in the sorted function. Each resource has an integer priority, and the highest priority is the resource we want. So we tell the sorted function to sort by priority, passing it a key argument that gets the priority using a lambda function. The function defaults to 100 if the resource doesn’t specify a priority; thus we prefer an empty priority over any other value. The reversed keyword puts the highest priority at the front of the list. Seriously, I hope you skipped that, because it’s a lot of information that is not too pertinent to this tutorial, even if the code it describes is. It took me a lot of trial and error to come up with this code, so just be glad you don’t have to! So, assuming a list of resources is returned for the given jabber id, we pick the front one off the list, which is the one deemed to have highest priority. We then set the status_message directly from that item, defaulting to an empty string if they user hasn’t set one. The show key in this dict may return the empty string if the user is online (available) without an explicit status, so we check if the value is set and default to “available” if it is not. 
However, there’s a chance that call returns a completely empty list which tells us what?: That the user is not online. So we have that else clause constructing a fake presence dict indicating that the user is offline. Finally, things become simple again. We return the constructed dictionary that maps property names to values, and let Kivy populate the list for us. All we have to do for that to happen is tell Kivy to use our custom args_converter instead of the default one. Change the adapter property in the KV Language file to: adapter: la.ListAdapter(data=[], cls=ok.BuddyListItem, args_converter=root.roster_converter) Notice that I split the command into two lines (indentation is important, if a bit bizarre.) in order to conform to pep 8‘s suggestion that line length always be less than 80 characters. Now, finally, if you run this code and login, you should see all the information about your buddies pop up! However, it’s still a wee bit ugly. Let’s find some ways to spice it up. One simple thing we can do is highlight different rows by giving each one a background color. We can start by adding a background attribute as an ObjectProperty to the BuddyListItem. Then we can set this value inside the roster_converter: if index % 2: result['background'] = (0, 0, 0, 1) else: result['background'] = (0.05, 0.05, 0.07, 1) Remember that index parameter that is passed into all args_converters and we were ignoring. Turns out it comes in handy. We can take the modulo (that’s the remainder after integer devision if you didn’t know) 2 of this value to determine if it’s an even or odd numbered row. If it’s odd (the remainder is not zero), then we set the background to black. Otherwise, we set it to almost black, a very deep navy. The background color is set as a tuple of 4 values, the RGBA value (Red, Green, Blue, Alpha). Each of those takes a float between 0 and 1 representing the percentage of each of those to add to the color. 
Now, most Kivy widgets don’t have a background attribute, so we’re going to have to draw a full screen rectangle on the widget’s canvas. This is great news for you, because we get to cover canvas, or graphics, instructions, something this tutorial has neglected so far. The canvas is a drawing surface that accepts arbitrary drawing commands. You can think of it as lower level than widgets, although in reality, widgets are also drawn on the canvas, and every widget has a canvas. Here’s how we set up a rectangle on the BuddyListItem canvas:

canvas.before:
    Color:
        rgba: self.background
    Rectangle:
        pos: self.pos
        size: self.size

I put this code above the labels in the BuddyListItem. The canvas.before directive says “issue this group of commands before painting the rest of the widget”. We first issue a Color command, setting its rgba attribute to the 4-tuple we defined as the background. Then we issue a Rectangle instruction, using the currently set Color and setting the position and size to be dynamically bound to the position and size of the widget. If you resize the window, the rectangle will automatically be redrawn at the correct size. Another thing I want to do to improve the interface is render status messages underneath the username. They could go in a smaller font and maybe a different color. This can be done entirely in KV Language:

<BuddyListItem>:
    size_hint_y: None
    height: "75dp"
    canvas.before:
        Color:
            rgba: self.background
        Rectangle:
            pos: self.pos
            size: self.size
    BoxLayout:
        size_hint_x: 3
        orientation: 'vertical'
        Label:
            text: root.jabberid
            font_size: "25dp"
            size_hint_y: 0.7
        Label:
            text: root.status_message
            color: (0.5, 0.5, 0.5, 1.0)
            font_size: "15dp"
            size_hint_y: 0.3
    Label:
        text: root.full_name
    Label:
        text: root.online_status

I changed the height of the rows to be more friendly. The canvas I left alone, but I added a BoxLayout around two of the labels to stack one above the other in a vertical orientation.
I made this BoxLayout 3 times as wide as the other boxes in the horizontal layout by giving it a size_hint_x. The two labels also get size hints, font sizes, and a color. When we run this it looks… a little better. Actually, it still looks like ass, but this is because of my incompetence as a designer and has nothing to do with Kivy! Now one more thing I’d like to do is render a status icon instead of the text “available”, “offline”, etc. Before we can do that, we need some icons. I wanted something in the public domain since licensing on this tutorial is a bit hazy. A web search found this library and I found some public domain icons named circle_<color> there. I grabbed four of them using wget:

cd orkiv
mkdir icons
wget -O icons/available.png
wget -O icons/offline.png
wget -O icons/xa.png
wget -O icons/away.png

Once the files are saved, all we have to do to render them is replace the online_status label with an Image:

Image:
    source: "orkiv/icons/" + root.online_status + ".png"

And that’s it for part 5. I promised we would render a complete buddy list, and that’s what we’ve done! In part 6, we’ll make this list interactive. Until then, happy hacking!

Monetary feedback

If you are enjoying this tutorial and would like to see similar work published in the future, please support me. I plan to one day reduce the working hours at my day job to devote more time to said books - Tip me on Gittip. Finally, if you aren’t in the mood or financial position to help fund this work, you can always share it on your favorite social platforms.

hi, Dusty!
thanks very much for you work, it’s helpful on me. but there is a error on my ubuntu. It’s take whole day from me. WARNING:kivy:stderr: Traceback (most recent call last): WARNING:kivy:stderr: File “__main__.py”, line 136, in WARNING:kivy:stderr: Orkiv().run() WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/app.py”, line 600, in run WARNING:kivy:stderr: runTouchApp() WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/base.py”, line 454, in runTouchApp WARNING:kivy:stderr: EventLoop.window.mainloop() WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/core/window/window_pygame.py”, line 325, in mainloop WARNING:kivy:stderr: self._mainloop() WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/core/window/window_pygame.py”, line 231, in _mainloop WARNING:kivy:stderr: EventLoop.idle() WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/base.py”, line 294, in idle WARNING:kivy:stderr: Clock.tick() WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/clock.py”, line 370, in tick WARNING:kivy:stderr: self._process_events() WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/clock.py”, line 481, in _process_events WARNING:kivy:stderr: if event.tick(self._last_tick) is False: WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/clock.py”, line 280, in tick WARNING:kivy:stderr: ret = callback(self._dt) WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/animation.py”, line 304, in _update WARNING:kivy:stderr: self.stop(widget) WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/animation.py”, line 188, in stop WARNING:kivy:stderr: self.dispatch(‘on_complete’, widget) 
WARNING:kivy:stderr: File “_event.pyx”, line 281, in kivy._event.EventDispatcher.dispatch (/home/peng/code/orkiv/venv/build/kivy/kivy/_event.c:4152) WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/uix/modalview.py”, line 184, in WARNING:kivy:stderr: a.bind(on_complete=lambda *x: self.dispatch(‘on_open’)) WARNING:kivy:stderr: File “_event.pyx”, line 285, in kivy._event.EventDispatcher.dispatch (/home/peng/code/orkiv/venv/build/kivy/kivy/_event.c:4203) WARNING:kivy:stderr: File “__main__.py”, line 78, in connect_to_jabber WARNING:kivy:stderr: app.root.show_buddy_list() WARNING:kivy:stderr: File “__main__.py”, line 59, in show_buddy_list WARNING:kivy:stderr: self.buddy_list = BuddyList() WARNING:kivy:stderr: File “__main__.py”, line 30, in __init__ WARNING:kivy:stderr: super(BuddyList,self).__init__() WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/uix/boxlayout.py”, line 102, in __init__ WARNING:kivy:stderr: super(BoxLayout, self).__init__(**kwargs) WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/uix/layout.py”, line 61, in __init__ WARNING:kivy:stderr: super(Layout, self).__init__(**kwargs) WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/uix/widget.py”, line 163, in __init__ WARNING:kivy:stderr: Builder.apply(self) WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/lang.py”, line 1429, in apply WARNING:kivy:stderr: self._apply_rule(widget, rule, rule) WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/lang.py”, line 1556, in _apply_rule WARNING:kivy:stderr: value, rule, rctx['ids']) WARNING:kivy:stderr: File “/home/peng/code/orkiv/venv/local/lib/python2.7/site-packages/kivy/lang.py”, line 1221, in create_handler WARNING:kivy:stderr: f.bind(**{k[-1]: fn}) WARNING:kivy:stderr: TypeError: descriptor ‘bind’ of 
‘kivy._event.EventDispatcher’ object needs an argument

Hey guys, I tried to run this program outside the “venv” environment and it produced the correct result, but I don’t know why; maybe my Kivy environment is wrong.
http://archlinux.me/dusty/2013/07/16/creating-an-application-in-kivy-part-5/
Have you ever been stumped by a difficult XML transformation issue that you couldn't solve with XSLT (Extensible Stylesheet Language Transformation) alone? Take, for example, a simple filter stylesheet that selects only those . XSLT weaknesses. XSLT's main weakness is text processing, which seems reasonable since its purpose is to render XML. However, because XML content is entirely text, XSLT needs stronger text handling. Needless to say, stylesheet designers require some extensibility from time to time. With Xalan, Java provides this extensibility. Use JDK classes within XSLT You might be pleased to know that you don't have to write any Java code to take advantage of Xalan's extensibility. When you use Xalan, you can create and invoke methods on almost any Java object. Before using a Java class, you must provide an XSLT namespace for it. This example declares "java" as a namespace for everything in or under the Java package (i.e., the entire JDK): <xsl:stylesheet Now we need something to do. Let's start with a small XML document: <article> <title>Java May Be a Fad</title> <author>J. Burke</author> <date>11/30/97</date> </article> You've been asked to style this XML so the title appears in uppercase. A developer new to XSLT would simply pop open an XSLT reference to look for the toUpper() function; however, she'd be disappointed to find that the reference lacks one. The translate() method is your best bet, but I have an even better method: java.lang.String.toUpperCase(). To use this method, you need to instantiate a String object with the title contents. Here is how you can create a new String instance with the title element's contents: <xsl:template <xsl:variable The name attribute specifies the handle to your new String instance. You invoke the constructor by first specifying the namespace along with the remaining path to the String class. As you might have noticed, String lacks a new() method. 
You use new() to construct a Java object in Xalan; it corresponds to Java's new keyword. The arguments given to new() determine the constructor version that will be called. Now that you have the title contents within a Java String object, you can use the toUpperCase() method, like so: <xsl:value-of This might look strange to you at first. When using Java methods on a particular instance, the first argument is the instance you want the method invoked on. Obviously Xalan uses introspection to provide this capability. Below you'll find another trick. Here is how you might emit the date and time anywhere within your stylesheet using java.lang.Date: <xsl:value-of Here's something that will make the day of anyone required to localize a generic stylesheet between two or more languages. You can use java.util.ResourceBundle to localize literal text within a stylesheet. Since your XML has an author tag, you might want to print "Author:" next to the person's name. One option is to create a separate stylesheet for each locale, i.e., one for English, another for Chinese, and so on. The problems inherent in this approach should be evident. Keeping multiple stylesheet versions consistent is time consuming. You also need to modify your application so that it chooses the correct stylesheet based on the user's locale. Instead of duplicating the stylesheet for each language, you can take advantage of Java's localization features. Localizing with the help of a ResourceBundle proves a better approach. Within XSLT, load the ResourceBundle at the beginning of your stylesheets, like so: <xsl:variable The ResourceBundle class expects to find a file called General.properties in your CLASSPATH. Once the bundle is created, it can be reused throughout the stylesheet. This example retrieves the author resource: <xsl:value-of Notice again the strange method signature. 
Normally, ResourceBundle.getString() takes only one argument; however, within XSLT you need to also specify the object by which you want to invoke the method. Write your own extensions For some rare situations, you might need to write your own XSLT extension, in the form of either an extension function or an extension element. I will discuss creating an extension function, a concept fairly easy to grasp. Any Xalan extension function can take strings as input and return strings to the XSLT processor. Your extensions can also take NodeLists or Nodes as arguments and return these types to the XSLT processor. Using Nodes or NodeLists means you can add to the original XML document with an extension function, which is what we will do. One type of text item encountered frequently is a date; it provides a great opportunity for a new XSLT extension. Our task is to style an article element so the date prints in the following format: Can standard XSLT complete the date above? XSLT can finish most of the task. Determining the actual day is the difficult part. One way to quickly solve that problem is to use the java.text.SimpleDate format class within an extension function to return a string formatted as we wish. But wait: notice that the day appears in bold text. This returns us to the initial problem. The reason we are even considering an extension function is because the original XML document failed to structure the date as a group of nodes. If our extension function returns a string, we will still find it difficult to style the day field differently than the rest of the date string. 
Here's a more useful format, at least from the perspective of an XSLT designer: <date> <month>11</month> <day>30</day> <year>2001</year> </date> We now create an XSLT extension function, taking a string as an argument and returning an XML node in this format: <formatted-date> <month>November</month> <day>30</day> <day-of-week>Friday</day-of-week> <year>2001</year> </formatted-date> The class hosting our extension function doesn't implement or extend anything; we will call the class DateFormatter: public class DateFormatter { public static Node format (String date) {} Wow, too easy, huh? There are absolutely no requirements placed on the type or interface of a Xalan extension function. Generally, most extension functions will take a String as an argument and return another String. Other common patterns are to send or receive org.w3c.dom.NodeLists or individual Nodes from an extension function, as we will do. See the Xalan documentation for details on how Java types convert to XSLT types. In the code fragment above, the format() method's logic breaks into two parts. First, we need to parse the date string from the original XML document. Then we use some DOM programming techniques to create a Node and return it to the XSLT processor. The body of our format() method implementation reads: Document doc = DocumentBuilderFactory.newInstance(). newDocumentBuilder().newDocument(); Element dateNode = doc.createElement("formatted-date"); SimpleDateFormat df = (SimpleDateFormat) DateFormat.getDateInstance(DateFormat.SHORT, locale); df.setLenient(true); Date d = df.parse(date); df.applyPattern("MMMM"); addChild(dateNode, "month", df.format(d)); df.applyPattern("EEEE"); addChild(dateNode, "day-of-week", df.format(d)); df.applyPattern("yyyy"); dateNode.setAttribute("year", df.format(d)); return dateNode; dateNode will contain our formatted date values that we return to the stylesheet. Notice that we've utilized java.text.SimpleDateFormat() to parse the date. 
This allows us to take full advantage of Java's date support, including its localization features. SimpleDateFormat handles the numeric date conversion and returns month and day names that match the locale of the VM running our application. Remember: the primary purpose of an extension function is simply to allow us access to existing Java functionality; write as little code as possible. An extension function, like any Java method, can use other methods within the same class. To simplify the format() implementation, I moved repetitive code into a small utility method: private void addChild (Node parent, String name, String text) { Element child = parent.getOwnerDocument().createElement(name); child.appendChild(parent.getOwnerDocument().createTextNode(text)); parent.appendChild(child); } Use DateFormatter within a stylesheet Now that we have implemented an extension function, we can call it from within a stylesheet. Just as before, we need to declare a namespace for our extension function: <xsl:stylesheet This time, we fully qualified the path to the class hosting the extension function. This is optional and depends on whether you'll be using other classes within the same package or just a single extension object. You can declare the full CLASSPATH as the namespace or use a package and specify the class where the extension function is invoked. By specifying the full CLASSPATH, we type less when we call the function. To utilize the function, simply call it from within a select tag, like so: <xsl:template <xsl:apply-templates </xsl:template>
https://www.javaworld.com/article/2075947/core-java/xslt-blooms-with-java.html
By Randal T. (Intel)

Starting with the Intel® AMT SDK version 7.0 and higher, it’s becoming much easier to access Intel® AMT technology using Windows. If you’re new to PowerShell: a reusable command is called a cmdlet (pronounced “command-let”). You can download the initial release of the module here; there is also a version included in the latest Intel® AMT 7.0 SDK, and, as you may have noticed, the SDK now provides code snippets written in PowerShell. Moreover, there will be a new release of the module at the end of March 2011 that will have a host of new features, including PowerShell Drive support, which will allow you to manage nearly every Intel® AMT firmware setting in PowerShell using simple Get-Item and Set-Item cmdlets. However, the primary purpose of this blog is not to go over each individual command offered in the module but instead to focus on how you can integrate the PowerShell commands into a management console GUI. After installing the Intel module, users can open a command prompt, import the PowerShell module, and start running Intel® AMT commands from the shell environment. However, what if you wanted to provide a GUI application for your users? Would such a GUI application need to be written in PowerShell? The answer is no. In fact, the GUI can be written in any language that’s capable of launching PowerShell.exe and passing in the command arguments. However, if your GUI application happens to be a .NET application, it can use the PowerShell module more efficiently, and I’ll demonstrate this in this blog. Let’s start with a simple sample .NET application that will demonstrate calling the PowerShell module command “Invoke-AMTPowerManagement”. What we want is a GUI menu item that will invoke Invoke-AMTPowerManagement when it’s clicked. That’s it; pretty simple, right? After that, it would be fairly simple, using the same techniques, to write a full-featured GUI all built on top of the reusable PowerShell commands.
The first thing we need to do is create a PowerShell runspace in .NET. The PowerShell runspace allows us to run our module commands directly in our .NET application (no need to spawn PowerShell.exe or anything like that):

// Add a reference to System.Management.Automation.dll, found under
// C:\Program Files (x86)\Reference Assemblies\Microsoft\WindowsPowerShell\v1.0\
// If no such file exists on your system, download and install the PowerShell 2.0 SDK from microsoft.com

using System.Management.Automation;
using System.Management.Automation.Runspaces;
using System.Collections.ObjectModel;

// Create the default initial session state
InitialSessionState iss = InitialSessionState.CreateDefault();

// Import the Intel module into the runspace
iss.ImportPSModule(new string[] { "IntelvPro" });

Runspace runSpace = RunspaceFactory.CreateRunspace(iss);
runSpace.Open();
_ps = PowerShell.Create();
_ps.Runspace = runSpace;

Next we will use the PowerShell object to define the module command that we want to run:

// Add the cmdlet and specify the runspace.
_ps.AddCommand("Invoke-AmtPowerManagement");
_ps.AddParameter("Operation", "PowerOn"); // this could also be PowerOff

Finally, we will specify the target computer and credentials that will be used when performing the command.
_ps.AddParameter("ComputerName", textBox1.Text); // Note: textBox1.Text holds the computer name in our simple GUI
_ps.AddParameter("Port", "16992"); // this can be changed to 16993 for TLS

// PowerShell uses secure strings for credentials
System.Security.SecureString secString = new System.Security.SecureString();
foreach (char passwordChar in textBox3.Text)
    secString.AppendChar(passwordChar);
// Note: textBox3.Text holds the digest password in our simple GUI

PSCredential amtCred = new PSCredential(textBox2.Text, secString);
// Note: textBox2.Text holds the digest user in our simple GUI

_ps.AddParameter("Credential", amtCred);
Collection<PSObject> results = _ps.Invoke();

Note: the PowerShell module can support mutual TLS and Kerberos credentials using the currently logged-on user. However, for simplicity, this short sample assumes your Intel AMT device is configured using HTTP/Digest authentication. If you want to know how to invoke commands using HTTPS/Kerberos authentication, just let me know and I’ll show you how to do that; it’s pretty easy. That’s it! We can now securely leverage all the commands available in the PowerShell module from our .NET GUI application. You can also write a PowerShell script that performs an entire sequence of AMT commands and just use the runspace to run the script as well. If you’re into PowerShell (or are planning on being into PowerShell) the Intel module just made things a lot easier. You can download the full DevStudio 2008 project source for this example here
https://software.intel.com/en-us/blogs/2011/03/09/integrating-intel-active-management-technology-intel-amt-by-taking-advantage-of-the-windows-powershell-module-for-intel-vpro
Not sure if this is your problem, but since methods don't have namespaces, I'm not sure why your code could not be reduced to:

<dtml-if
  <dtml-var index.html>
<dtml-else>
  <dtml-var chunk_editFrameset>
</dtml-if>

The namespace of index_html right now is 'edit' (or another folder that acquires it) ...and... if the namespace of index_html is 'edit' then the namespace of index.html will also be, and ergo, the namespace of chunk_dspPrimaryCol will also be edit (index.html doesn't have a method unto itself), changing your code to:

<dtml-if
  <dtml-var chunk_dspPrimaryColPublic>
<dtml-else>
  <dtml-var chunk_dspPrimaryColEdit>
</dtml-if>

Just keep in mind that any object with a namespace (folders, documents, certain class instances) that is in the acquisition path can have that namespace explicitly invoked with <dtml-with name_of_object>. This only works for objects with a namespace; methods are not acquirable objects, and though you might call them objects (in a loose sense) they are not objects in a Zopish sense. I don't know if my suggestion will work for you, but hopefully it will help.

Sean
=========================
Sean Upton
Senior Programmer/Analyst
SignOnSanDiego.com
The San Diego Union-Tribune
619.718.5241
[EMAIL PROTECTED]
=========================

-----Original Message-----
From: Geoffrey L. Wright [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 03, 2001 3:16 PM
To: [EMAIL PROTECTED]
Subject: [Zope] Poor Procedural Programmer Needs OOPish Enlightenment or... "A Cry for Namespace Help"

I seem to have run into one of those Zope namespace issues that's starting to make me dizzy. I have an index_html method that displays content conditionally depending on where it's called. It looks like this:

<dtml-if
  <dtml-var index.html>
<dtml-else>
  <dtml-var chunk_editFrameset>
</dtml-if>

The chunk_editFrameset method also displays the same index.html file in one of two frames. This works like a champ.
If I'm in a directory called edit when this is called, it displays the frameset. Otherwise, it displays the index.html directly. The index.html method contains a bunch 'o static html plus another method called chunk_dspPrimaryCol that also conditionally displays information based on where it's called. chunk_dspPrimaryCol looks like this:

<dtml-if
  <dtml-var chunk_dspPrimaryColPublic>
<dtml-else>
  <dtml-var chunk_dspPrimaryColEdit>
</dtml-if>

This doesn't work like I'd hoped, since I have to move back up the namespace stack to find index.html, and by the time I do I'm no longer in the edit folder. So I _always_ end up displaying the chunk_dspPrimaryColPublic method, even if I call the index_html method from within the edit folder. What I need (I think) is a way to keep all of this activity in the edit folder. Or I need some other way of solving this problem. Any thoughts? I hope my description was clear enough...

--
Geoffrey L. Wright
Developer / Systems Administrator
(907) 563-2721 ex. 4900

_______________________________________________
Zope maillist - [EMAIL PROTECTED]
** No cross posts or HTML encoding! **
(Related lists - )
https://www.mail-archive.com/zope@zope.org/msg13201.html
#include <hallo.h>

Wichert Akkerman wrote on Fri, May 31, 2002 at 01:41:37PM:

> What has the BTS got to do with this? Policy is very clear on this
> point:
>
>   If a package needs any special device files that are not included in
>   the base system, it must call `MAKEDEV' in the `postinst' script,
>   after asking the user for permission to do so.

If you wanna enforce this rule, you have to take makedev away from its maintainer or allow anyone to NMU it. Otherwise, Bdale has all the power to make packaging more complicated for maintainers of such packages.
https://lists.debian.org/debian-devel/2002/05/msg03126.html
If you run the following code:

using System;
using System.Linq.Expressions;
using System.Diagnostics;

public class E
{
    public double V { get; set; }
}

public class Program
{
    public static void Main()
    {
        E e = new E();
        Func<double> f = () => e.V;
        Expression expr = Expression.Property(Expression.Constant(e), "V");
        Expression<Func<double>> exp = Expression.Lambda<Func<double>>(expr);
        Func<double> ef = exp.Compile();
        e.V = 123;
        int attempts = 5;
        for (int j = 0; j < attempts; j++)
        {
            int c = 100000;
            double[] r1 = new double[c];
            Stopwatch sw = new Stopwatch();
            sw.Start();
            for (int i = 0; i < c; i++)
            {
                r1[i] = f();
            }
            sw.Stop();
            double[] r2 = new double[c];
            Stopwatch sw2 = new Stopwatch();
            sw2.Start();
            for (int i = 0; i < c; i++)
            {
                r2[i] = ef();
            }
            sw2.Stop();
            double rat = (double)sw.ElapsedTicks / sw2.ElapsedTicks;
            Console.WriteLine(rat);
        }
    }
}

it turns out that the compiled expression is much slower than a plain lambda. Is this the expected result? Is it possible to rewrite the expression somehow to get equivalent code that runs faster?

Your delegate f is created from a compiler-generated class with a field e of type E, and it accesses the value like this:

return <Target>.e.V;

In the second case (the expression), the delegate is created using a Constant instruction that uses a Closure as its target, with an array of objects in which e is the first element. The code can be represented like this:

return ((E)<Target>.Constants[0]).V;

That's why performance is better in the first case.

Note: with the "Watch" window in Visual Studio, when you debug the code, you can inspect "f.Target" and "ef.Target" to confirm this.
https://expressiontree-tutorial.net/knowledge-base/42751759/compiled-expression-tree-performance
What Variable should I use to store text

Look at C-style strings (char*s) or, preferably, the std::string class in <string> (part of the C++ Standard Library). Don't worry, the link Stack Overflow provided is a good reference, but the string class is much easier to use than reading that page would suggest. I don't know of any specific tutorials on string that would be easier to follow, but I'm sure there are some.

thanks everyone

Oh, I must admit it is a good reference compared to learning about it. Here are some tutorials that you may find helpful:
- C++ String Class Tutorial
- C++ Input and Output - Manipulating String Variables
- C++ Standard Template Library
- Stack Overflow

Nice, thank you. Upon quick skimming I like the fourth one best.

Yeah but the third one didn't exactly give me the warm fuzzies:

That using namespace std; line doesn't have anything to do with the string.h header, does it? Maybe string, but not string.h. Also, I wouldn't say that simply including a header lets you "recognize" the string library.

Quote: A string is a sequence of characters. In order to use strings in your programs, you need to first make sure that you are using a recognized string library. This can be achieved by adding the following to the top of your code (where you place your header files):

#include<string.h>
using namespace std;

The second line tells the compiler to use the standard "namespace" for the string library.

Quote: The rest is fairly simple.
Here is how a sample program using strings would look like:

#include <iostream>
#include <string>
using namespace std;

void main() // Yuck
{
    string firstname = "test";
    cout << "Enter first name: ";
    cin >> firstname;
    cout << firstname << endl;
}

Some tutorials don't give accurate information. It is hard to find tutorials on <string>, so I just posted all the information I had. Sadly, some people still use incorrect syntax and other idioms like void main() when they shouldn't. A post I made a while back shows why it isn't good to use these types of coding practices. - Stack Overflow
http://cboard.cprogramming.com/cplusplus-programming/58742-what-variable-printable-thread.html
# C2x: the future C standard

![image](https://habrastorage.org/r/w1560/webt/7j/dw/5r/7jdw5r9gajn2olyfuypvvwskuve.png)

> I strain to make the far-off echo yield
> A cue to the events that may come in my day.
>
> (‘Doctor Zhivago’, Boris Pasternak)

I’ll be honest: I don’t write in pure C that often anymore and I haven’t been following the language’s development for a long time. However, two unexpected things happened recently: C [won back](https://www.tiobe.com/tiobe-index/) the title of the most popular programming language according to TIOBE, and the first truly [interesting](https://nostarch.com/Effective_C) book in years on this language was published. So, I decided to spend a few evenings studying material on C2x, the future version of C. Here I will share with you what I consider to be its most interesting new features.

The Committee, the Standard and all that
========================================

I am sure most of you know how C is developed, but, for anyone who doesn’t, let me first explain the terminology and then briefly retell the story of the language. In 1989 the already extremely popular programming language, C, reached new heights of recognition, becoming both the American national ([ANSI](https://en.wikipedia.org/wiki/American_National_Standards_Institute)) and the international ([ISO](https://en.wikipedia.org/wiki/International_Organization_for_Standardization)) standard. This version of C was known as C89, or ANSI C to differentiate it from the numerous semi-compatible dialects that had existed previously. A new version of the language standard is released approximately once every ten years. At the present time there are four versions in existence: the original C89, C99, C11 and C18. It is not known when the next version will be published, so the version currently being worked on is referred to as C2x. Changes to the standard are made by a special group, the so-called WG14.
It comprises interested representatives of the industry from various countries. In the specialist English-language literature this group is often referred to as the ‘Committee’, so that is what I am going to call it here as well. The Committee receives proposals from those involved, each proposal being given a designation (e.g. [N2353](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2353.htm)). Proposals usually include: the reason for introducing changes, references to other documents, and specific changes to the Standard. Proposals can come in several versions, each of which is given a unique designation.

Returning to our topic for a moment, I have split this article into three parts, ordered according to the likelihood of the relevant changes being made to the standard. The three parts are as follows:

1. Proposals the Committee has already accepted.
2. Proposals received positively but returned to the authors for revision.
3. What I consider to be the ‘juiciest’ proposals: rumoured unpublished proposals, being discussed behind the scenes by members of the Committee.

Proposals accepted by the Committee
===================================

### strdup and strndup functions

I may appear ignorant when I say that I wasn’t aware these functions weren’t in the standard C library. What could be more obvious and simpler than copying strings? But no, C isn’t like that. C doesn’t like its users. So, 20 years later, we are [getting](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2353.htm) the `strdup` and `strndup` functions!

```
#include <string.h>

char *strdup (const char *s);
char *strndup (const char *s, size_t size);
```

It is nice to know that even the Committee accepts the inevitable.

### Attributes

Developers of major C compilers have a favourite game they play: coming up with extensions to the language, most often expressed through attributes of declarations and definitions.
The language itself, of course, does not provide any special syntax for such things, so each person needs to do what they can to be creative. In order to somehow sort out this mess without coming up with dozens of new keywords, the Committee thought up a syntax-to-rule-them-all. In a nutshell, a standard [syntax](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2335.pdf) for specifying attributes will be approved as part of the next version. Here is an example from the proposal:

```
[[attr1]] struct [[attr2]] S { } [[attr3]] s1 [[attr4]], s2 [[attr5]];
```

Here, `attr1` relates to `s1` and `s2`; `attr2` relates to the `struct S` definition; `attr3` relates to the type of `s1`; `attr4` to the `s1` identifier; and `attr5` to the `s2` identifier.

The Committee has already voted to include attributes in the standard, but there is still a long time to wait before the updated version of the standard is published. Nevertheless, proposal authors are already playing with their new toy. Here are some of the proposed attributes:

1. The `deprecated` attribute allows you to mark a declaration as obsolete, which allows compilers to issue appropriate warnings.
2. The `fallthrough` attribute can be used to explicitly mark the places in switch case branches where the control flow is supposed to cross case boundaries.
3. Using the `nodiscard` attribute you can explicitly specify that a value returned by the function needs to be processed.
4. Where a variable or function is deliberately not used, you can mark it with the `maybe_unused` attribute (instead of the idiomatic `(void) unused_var`).
5. A function not returning to the call location can be marked with the `noreturn` attribute.

### Old-school function parameter declaration style (K&R)

‘K&R declaration’ (read “when types are specified after the brackets” or “I don’t understand old code in C”) is a form of function parameter declaration that was already out-of-date way back in 1989.
It is finally going to be [burnt with fire](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2432.pdf). In other words, you won’t be allowed to do this anymore:

```
long maxl (a, b)
long a, b;
{
    return a > b ? a : b;
}
```

Enlightenment has finally come to code in C! Function declarations will at last actually do what people expect them to:

```
/* function declaration without arguments */
int no_args();

/* also function declaration without arguments */
int no_args(void);
```

### Signed integer representation

What has felt like an endless saga is nearing completion, it would seem. The Committee has [come to terms](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2412.pdf) with the fact that there are no such things as unicorns or mythical architectures, and programmers in C are dealing with [two’s complement](https://en.wikipedia.org/wiki/Two's_complement) signed integer representation. In its present form this clarification simplifies the standard a little, but in future it should make it possible to get rid of the language’s favourite undefined behaviour.

Proposals being worked on
=========================

While it can be said that the changes listed above already exist in our reality, the following group of proposals is still being developed. Nevertheless, the Committee has given them provisional approval and, assuming the authors show due diligence, they should definitely be accepted.

### Anonymous function parameters

I regularly write 1-2 trial programs in C a week. And, quite honestly, I have long grown tired of having to specify the names of unused arguments. Implementing [one](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2510.pdf) of the proposals positively assessed by the Committee would mean that we wouldn’t have to keep specifying the names of parameters in function definitions:

```
int main(int, char *[])
{
    /* No hassle! */
    return 0;
}
```

It’s a small thing – but welcome!
### The old new keywords

After a very loooong transition period the Committee, finally, decided to accept, erm, [‘new’](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2458.pdf) [keywords](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2457.pdf) into the language: `true`, `false`, `alignas`, `alignof`, `bool`, `static_assert` and others. It will finally be possible to drop headers like `<stdbool.h>` and `<stdalign.h>`.

### Including binary files in the source file

The [option](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2499.pdf) of including binary data from files in the executable file is something all game developers are going to find unbelievably useful:

```
const int music[] = {
#embed int "music.wav"
};
```

It’s my belief that the Committee realises that the community knows where their next meeting is being held, and that this preprocessor directive will be accepted without questions.

### Farewell, NULL – or nullptr ready on the starting blocks

It would seem that the problematic `NULL` macro is being [replaced](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2394.pdf) with the keyword `nullptr`, which will be equivalent to the expression `((void*)0)` and, in the case of type conversion, will have to remain a pointer type. Any use of NULL should be accompanied by a compiler warning:

```
/* I always forget why the cast is necessary. */
int execl(path, arg1, arg2, (char *) NULL);

/* But happiness is just round the corner */
int execl(path, arg1, arg2, nullptr);
```

If this example makes no sense to you, then take a look at the Linux documentation under `man 3 exec` and you will find your enlightenment there.

### Reform of error processing in the standard library

The processing of standard library function errors has been a longstanding problem in C. The combination of unfortunate solutions in various versions of the standard, the conservative stance of the Committee and reverse-compatibility issues have all got in the way of finding a solution that suits everyone.
And here, finally, is someone prepared to [propose](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2429.pdf) a solution for compiler developers, for the super-conservative Committee and for us mere mortals:

```
[[ oob_return_errno ]] int myabs (int x)
{
    if (x == INT_MIN) {
        oob_return_errno (ERANGE, INT_MIN);
    }
    return (x < 0) ? -x : x;
}
```

Let me draw your attention to the `oob_return_errno` attribute. It means that the following functions will be generated from this template function:

1. A function returning a structure with an error flag and the result of the function’s work (`struct {T return_value; int exception_code}`).
2. A function returning the result of the function’s work and ignoring possible errors in the arguments, leading to undefined behaviour.
3. A function terminating execution in the case of an error in the arguments.
4. A function setting errno, that is, exhibiting the ordinary behaviour.

The compiler chooses between these options depending on how the programmer uses a given function:

```
bool flag;
int result = oob_capture(&flag, myabs, input);
if (flag) {
    abort();
}
```

In this case, if the function has not been carried out properly, this is shown with a flag, while errno is not affected. Function calls that save the error code to a variable look similar. The actual syntax, it would seem, will still change, but it is a good thing that the Committee is at least *thinking* in this direction.

Rumours
=======

The author of “Effective C”, along with other Committee members, [answered](https://news.ycombinator.com/item?id=22865357) questions from members of the Hacker News community. Lots of it overlaps with what we have noted above, but there are a couple of points which are important for programmers. These have not been formulated as proposals as such; however, Committee members are hinting that work might be underway in these areas.
### typeof operator

The `typeof` keyword was [implemented](https://gcc.gnu.org/onlinedocs/gcc/Typeof.html#Typeof) a long time ago in compilers and makes it easier to write correct macros. Here is a textbook example:

```
#define max(a,b) \
  ({ typeof (a) _a = (a); \
     typeof (b) _b = (b); \
     _a > _b ? _a : _b; })
```

Martin Sebor, a developer from Red Hat and a Committee member, maintains that a relevant proposal is already being worked on and will very likely be approved. Keeping my fingers crossed!

### defer operator

Some programming languages allow you to bind the freeing of resources to the lexical scope of a variable or, to put it more simply, to call given code when the control flow goes outside the scope of the variable. Pure C doesn’t have this option and never has, but compilers such as Clang and GCC have been implementing the `cleanup()` attribute for a long time:

```
int main(void)
{
    __attribute__((cleanup(my_cleanup_function))) char *s = malloc(sizeof(*s));
    return 0;
}
```

Robert Seacord, author of “Effective C” and a member of the Committee, has [admitted](https://news.ycombinator.com/item?id=22866311) that he is working on a proposal along the lines of the `defer` keyword from Go:

```
int do_something(void)
{
    FILE *file1, *file2;
    object_t *obj;

    file1 = fopen("a_file", "w");
    if (file1 == NULL) {
        return -1;
    }
    defer(fclose, file1);

    file2 = fopen("another_file", "w");
    if (file2 == NULL) {
        return -1;
    }
    defer(fclose, file2);

    /* ... */

    return 0;
}
```

In this example, the `fclose` function will be called with the `file1` and `file2` arguments in any case where the program goes outside the body of the `do_something` function. Vive la révolution!

Conclusions
===========

Changes to C are like genetic mutations: they don’t happen often and are rarely viable but, in the end, they push the evolution forward. The most recent unfortunate changes to C occurred ten years ago.
And the most recent quality leap forward in terms of development of the language happened over 20 years ago. And, by all accounts, the members of the Committee have now decided to consider moving forwards in respect of the new iteration of the standard. So, to conclude: use static analysers, run Valgrind as often as possible and try not to write overly-big programs in C! **PS** I think the “first truly interesting book” thing was an overstatement on my part. Someone recommended a book entitled [‘Modern C’](https://www.manning.com/books/modern-c) written by a member of the committee, and that would definitely be worth a read.
https://habr.com/ru/post/512802/
Provided by: manpages-dev_4.04-2_all

NAME
       setgid - set group identity

SYNOPSIS
       #include <sys/types.h>
       #include <unistd.h>

       int setgid(gid_t gid);

DESCRIPTION
       setgid() sets the effective group ID of the calling process.  If the
       caller is privileged (has the CAP_SETGID capability), the real GID
       and saved set-group-ID are also set.

RETURN VALUE
       On success, zero is returned.  On error, -1 is returned, and errno is
       set appropriately.

ERRORS
       EPERM  The calling process is not privileged (does not have the
              CAP_SETGID capability), and gid does not match the real group
              ID or saved set-group-ID of the calling process.

CONFORMING TO
       POSIX.1-2001, POSIX.1-2008, SVr4.

NOTES
       The original Linux setgid() system call supported only 16-bit group
       IDs.  Subsequently, Linux 2.4 added setgid32() supporting 32-bit IDs.
       The glibc setgid() wrapper function transparently deals with the
       variation across kernel versions.

SEE ALSO
       getgid(2), setegid(2), setregid(2), capabilities(7), credentials(7),
       user_namespaces(7)

COLOPHON
       This page is part of release 4.04 of the Linux man-pages project.  A
       description of the project, information about reporting bugs, and the
       latest version of this page, can be found at.
http://manpages.ubuntu.com/manpages/xenial/en/man2/setgid32.2.html
Confidence interval on an average

Posted February 10, 2013 at 09:00 AM | categories: statistics | tags: | View Comments

Updated April 09, 2013 at 08:54 AM

Python has a statistical package available in scipy for getting statistical distributions. This is useful for computing confidence intervals using the student-t tables. Here is an example of computing a 95% confidence interval on an average.

import numpy as np
from scipy.stats.distributions import t

n = 10        # number of measurements
dof = n - 1   # degrees of freedom
avg_x = 16.1  # average measurement
std_x = 0.01  # standard deviation of measurements

# Find the 95% confidence interval on the average
alpha = 1.0 - 0.95
conf_interval = t.ppf(1 - alpha/2.0, dof) * std_x / np.sqrt(n)

s = ['We are 95% confident the true average',
     ' is between {0:1.3f} and {1:1.3f}']
print ''.join(s).format(avg_x - conf_interval, avg_x + conf_interval)

We are 95% confident the true average is between 16.093 and 16.107

Copyright (C) 2013 by John Kitchin. See the License for information about copying.
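The same interval can be obtained more directly from scipy's t.interval, which returns both endpoints at once. A small sketch using the numbers above:

```python
import numpy as np
from scipy.stats.distributions import t

n, avg_x, std_x = 10, 16.1, 0.01

# loc is the centre of the interval; scale is the standard error of the mean
lo, hi = t.interval(0.95, n - 1, loc=avg_x, scale=std_x / np.sqrt(n))

print('95% CI: [{0:1.3f}, {1:1.3f}]'.format(lo, hi))
```

This avoids the manual alpha arithmetic and the ppf call, and gives the same [16.093, 16.107] interval.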
http://kitchingroup.cheme.cmu.edu/blog/2013/02/10/Confidence-interval-on-an-average/
This article will give you all the necessary information about the Serial Peripheral Interface (SPI) communication protocol of the AVR microcontroller used in the Arduino UNO and Arduino Mega boards. I have included a detailed specification, pin diagram, and code for SPI communication between two Arduino boards. I have also included an Arduino SPI read example with the RFID-RC522 reader. After this article, you will know how to use the SPI protocol and read/write data via the SPI protocol.

Basic of SPI Communication Protocol

Motorola developed the SPI (Serial Peripheral Interface) in the mid-1980s. SPI is a synchronous serial communication and full-duplex protocol. This means that data can be transferred in both directions at the same time. It can also be set for half-duplex mode.

What is SPI?

SPI is a synchronous serial communication protocol that transfers data on the rising or falling edge of a clock pulse between two microcontrollers, or between a microcontroller and SPI peripheral devices. An SPI device can be set as Master or Slave, and only the Master can generate clock pulses and initiate the communication. However, once the communication begins, both Master and Slave can transmit data simultaneously.

It is also known as the four-wire protocol: SPI communication uses the four wires MISO, MOSI, CLK, and CS/SS. An SPI bus can have only one Master and multiple slaves. The Master is usually a microcontroller, and the slaves can be microcontrollers or peripherals such as sensors, ADCs, DACs, LCDs, RTCs, etc.

SPI Master-Slave Interfacing

SPI protocol contains four lines: MISO, MOSI, SCK, and CS/SS.
MOSI (Master Out Slave In) – Using the MOSI pin, the Master sends data to the Slave.
MISO (Master In Slave Out) – Using MISO, the Slave can send data to the Master.
SCK (Serial Clock) – The Master generates the clock signal, and it provides synchronization between Master and Slave.
CS (Chip Select) – Using CS or Slave Select (SS), the Master can select a Slave. This is an active-low pin, so you need to set this pin low to select the Slave device.

SPI Master with Multiple Slaves

SPI protocol allows you to have multiple SPI devices sharing the same MOSI, MISO, and CLK lines of the Master. As you can observe in the above diagram, there are three slaves in which MOSI, MISO, and SCK are commonly connected to the Master, and the CS of each slave is connected separately to an individual CS pin of the Master.

Does Arduino Uno have SPI?

Yes, the Arduino Uno board comes with an AVR ATMEGA328P microcontroller, and it has one SPI interface on the board. So, to use SPI in Arduino Uno and set different modes and speeds, you need to learn about the SPI registers in AVR MCUs.

What are the SPI Registers in AVR?

The AVR series uses three registers to configure SPI communication: the SPI Control Register (SPCR), the SPI Status Register (SPSR), and the SPI Data Register (SPDR).

SPI Control Register (SPCR)

The default value of the SPCR register is 0x00.

Bit 7 is SPIE, the SPI Interrupt Enable bit. You can enable the SPI interrupt by setting the bit high and disable it by setting the bit low.
Bit 6 is SPE, the SPI Enable bit. You need to set this bit high to enable SPI.
Bit 5 is DORD, the Data Order bit. If the bit is set, the LSB is transmitted first. If the bit is clear, the MSB is transmitted first.
Bit 4 is MSTR, the Master/Slave Select bit. Set the bit for Master mode and clear it for Slave mode.
Bit 3 is CPOL, the Clock Polarity Select bit. If set, the clock starts from logic one; if clear, the clock starts from logic zero.
Bit 2 is CPHA, the Clock Phase Select bit. The Clock Polarity (CPOL) and Clock Phase (CPHA) bits define how serial data is transferred between the Master and the Slave. If the bit is set, the data is sampled on the trailing clock edge; if it is clear, the data is sampled on the leading clock edge.
Bits 1:0 are SPR1-SPR0, the SPI Clock Rate Select bits. The table below shows the SCK clock frequency select bit settings.

SPI Status Register (SPSR)

Bit 7 – SPIF: SPI Interrupt Flag bit. When a serial data transfer is complete, this flag gets set. It also gets set when the SS pin is driven low in Master mode.
Bit 6 – WCOL: Write Collision Flag bit. This bit is set when an SPI Data Register write occurs during a previous data transfer.
Bits 5:1 – Reserved bits. These bits are reserved and will always read as zero.
Bit 0 – SPI2X: Double SPI Speed bit. The SPI speed (SCK frequency) is doubled when this bit is set.

SPI Data Register (SPDR)

The SPI Data Register is a read/write register. It is used for data transfer between the register file and the SPI Shift Register. Writing to the register initiates data transmission. Reading the register causes the Shift Register receive buffer to be read.

What is SPI in Arduino?

Suppose you use an Arduino Uno or Arduino Mega and want to transfer data serially between two SPI devices. In that case, this part of the article will help you get started with SPI in the Arduino framework.

What are SPI pins in Arduino Uno?

Arduino Uno has one SPI communication interface: pin 10 (CS/SS), 11 (MOSI), 12 (MISO), and 13 (SCK).

Where is the Arduino SPI library?

There is a readily available library for SPI communication in the Arduino framework. To use the SPI library, you need to include #include <SPI.h> in your code, and you need to consider the following points.

- Enable SPI communication
- Select mode as Master or Slave
- Do you want to send the Most Significant Bit (MSB) or Least Significant Bit (LSB) first?
- Set clock idle as high or low
- Set sampling on the falling or rising edge of clock pulses
- Speed of SPI communication

What is the maximum speed of SPI?

You can set the speed of SPI by setting parameters in SPISettings. The speed of SPI will depend upon the chip's rating. For example, if you use a chip rated at 16 MHz, then use 16000000 in SPISettings.

How do I connect two Arduinos with SPI?

In this section, you will learn about interfacing two Arduino Uno boards, or an Arduino Uno and an Arduino Mega board, for SPI communication. Make sure that the ground of the two boards is common.

Interfacing of two Arduino Uno Boards for SPI

Interfacing of Arduino Uno Board and Arduino Mega Board for SPI

How do you write an SPI code?

To write code for SPI communication between two Arduino boards, first you need to set one board as Master and another as Slave. Following are the codes for setting an Arduino Uno or Mega as Master and as Slave.

Arduino Code for Master Mode

# include <SPI.h>

char str[] = "Hello Slave, I'm Arduino Family\n";

void setup()
{
  Serial.begin(115200);                 // set baud rate to 115200 for usart
  SPI.begin();
  SPI.setClockDivider(SPI_CLOCK_DIV8);  // divide the clock by 8
  Serial.println("Hello I'm SPI Mega_Master");
}

void loop(void)
{
  digitalWrite(SS, LOW);                // enable Slave Select
  // send test string
  for (int i = 0; i < sizeof(str); i++)
    SPI.transfer(str[i]);
  digitalWrite(SS, HIGH);               // disable Slave Select
  delay(2000);
}

How the code works for Master Mode

First, you need to include the header file to use the SPI library.

# include <SPI.h>

The SPI.begin() function initializes the SPI bus: it sets SCK, MOSI, and SS as outputs, pulls SCK and MOSI low, and pulls SS high on Arduino boards. By default, the code works in Master mode.

The SPI.setClockDivider(divider) function sets the SPI clock divider relative to the system clock. The available dividers are 2, 4, 8, 16, 32, 64, or 128.
The default setting is SPI_CLOCK_DIV4, which sets the SPI clock to one-quarter of the frequency of the system clock (5 MHz for boards at 20 MHz).

Now, in the void loop, you need to set the CS/SS pin low to select the slave device, transfer the data using SPI.transfer, and finally set the CS/SS pin high to disable the slave connection.

digitalWrite(SS, LOW);     // enable Slave Select
// send test string
for (int i = 0; i < sizeof(str); i++)
  SPI.transfer(str[i]);
digitalWrite(SS, HIGH);    // disable Slave Select
delay(2000);

Arduino Code for Slave Mode

# include <SPI.h>

char str[50];
volatile byte i;
volatile bool pin;

void setup()
{
  Serial.begin(115200);    // set baud rate to 115200 for usart
  Serial.println("Hello I'm SPI UNO_SLAVE");
  pinMode(MISO, OUTPUT);   // the Slave sends to the Master on MISO, so set it as output
  SPCR |= _BV(SPE);        // turn on SPI in slave mode
  i = 0;                   // buffer empty
  pin = false;
  SPI.attachInterrupt();   // turn on interrupt
}

void loop()
{
  static int count;
  if (pin)
  {
    pin = false;           // reset the flag
    if (count++ < 5)
    {
      Serial.print(count);
      Serial.print(" : ");
      Serial.println(str); // print the array on the serial monitor
      if (count == 5)
      {
        delay(1000);
        Serial.println("The end data");
      }
    }
    delay(1000);
    i = 0;                 // reset the buffer index to zero
  }
}

// Interrupt routine: called when a byte arrives from the Master
ISR(SPI_STC_vect)
{
  char c = SPDR;           // copy the received byte from the SPI Data Register
  if (i < sizeof str)
  {
    str[i++] = c;          // store it in the next index of the array
    if (c == '\n')         // end of the message
      pin = true;
  }
}

How the code works for Slave Mode

To use the Arduino board as a Slave device, you need to set the MISO pin as output and turn ON Slave mode. Also, you need to attach an interrupt so that you can handle the data received from the Master.

pinMode(MISO, OUTPUT);   // the Slave sends to the Master on MISO, so set it as output
SPCR |= _BV(SPE);        // turn on SPI in slave mode
SPI.attachInterrupt();   // turn on interrupt

Whenever an interrupt is generated by data arriving from the Master device, the Slave jumps to the ISR with the address SPI_STC_vect, which copies the data from SPDR to the variable c and finally from c to the str array. Finally, in the void loop, the received data is printed on the serial terminal.
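As mentioned under "What is the maximum speed of SPI?", newer Arduino cores prefer SPI transactions over setClockDivider. A sketch of the same master-side transfer using SPISettings (device-only code; the 16 MHz figure is an assumed chip rating, and sendString is a hypothetical helper name):

```
#include <SPI.h>

void sendString(const char *s, size_t len)
{
  // beginTransaction sets speed, bit order and mode atomically, which is
  // safer than setClockDivider when several SPI devices share the bus.
  SPI.beginTransaction(SPISettings(16000000, MSBFIRST, SPI_MODE0));
  digitalWrite(SS, LOW);    // select the slave
  for (size_t i = 0; i < len; i++)
    SPI.transfer(s[i]);
  digitalWrite(SS, HIGH);   // deselect the slave
  SPI.endTransaction();
}
```

The transaction API also disables interrupts that could otherwise start a competing SPI transfer in the middle of yours.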
Serial monitor output for Master and Slave devices is shown below.

Master device output:

Slave device output:

How to use the RFID-RC522 module (RFID reader) with the Arduino UNO

Now, I will provide the code and interfacing diagram for RFID (Radio Frequency Identification) with Arduino Uno. RFID is a system for transferring data over short distances. I will use the RC522 RFID reader and passive tags.

Interfacing of RFID-RC522 Module with Arduino Uno

Arduino Code for RFID-RC522

You need to install the RFID library in your Arduino IDE. Go to Tools > Manage Libraries. In the search box, type "mfrc" and install the MFRC522 SPI library.

#include <SPI.h>
#include <MFRC522.h>

#define chipS 10
#define RESET 9

MFRC522 mfrc522(chipS, RESET);  // Create MFRC522 instance

void setup()
{
  Serial.begin(115200);         // Initiate a serial communication
  SPI.begin();                  // Initiate SPI bus
  mfrc522.PCD_Init();           // Initiate MFRC522
  Serial.println("Scan your card to the RFID Reader...");
  Serial.println();
}

void loop()
{
  if ( ! mfrc522.PICC_IsNewCardPresent())  // Look for a new card
  {
    return;
  }
  if ( ! mfrc522.PICC_ReadCardSerial())    // Select one of the cards
  {
    return;
  }
  Serial.print("UID tag :");               // Show UID on serial monitor
  String content = "";
  for (byte i = 0; i < mfrc522.uid.size; i++)
  {
    Serial.print(mfrc522.uid.uidByte[i] < 0x10 ? " 0" : " ");
    Serial.print(mfrc522.uid.uidByte[i], HEX);
    content.concat(String(mfrc522.uid.uidByte[i] < 0x10 ? " 0" : " "));
    content.concat(String(mfrc522.uid.uidByte[i], HEX));
  }
  Serial.println();
  Serial.print("Reader MSG : ");
  content.toLowerCase();
  if (content.substring(1) == "4c cd d2 22")  // change here the UID of the card/cards
  {
    Serial.println("Valid member access");
    Serial.println();
    delay(2000);
  }
  else
  {
    Serial.print("Warning !!!!");
    Serial.print(" : ");
    Serial.println("Invalid member access");
    delay(2000);
  }
}

Serial Monitor Output:

Conclusion

In this article, I have shown you how to set up SPI communication between two Arduino boards, and an SPI read example for RFID with the Arduino Uno board. I hope you found this article helpful.

LZ
Sunday 27th of March 2022

"have to send on Master in so it set as output" — but the opposite happens in this code: the Master sends and the Slave reads. How can such nonsense get through?
https://www.makerguides.com/master-slave-spi-communication-arduino/
Sylvain Wallez wrote:
> Geoff Howard wrote:
..
>> Couldn't you also configure it to remove all ns without checking if
>> you know your output should have no other namespaces (as usually the
>> case with xhtml)? That way you could avoid buffering in that special
>> but common case.
>
> We can filter out all namespaces in HTML, but not in XHTML, as we can
> have composite documents with foreign markup.

Geoff refers to *common case*, which does not have any foreign markup, and you know this in advance. In this common case, it makes perfect sense to save CPU cycles and dump all but xhtml namespaces, and have a configuration parameter like

  leave-one-ns-namespace-but-drop-all-others=""

For non-common scenarios, with different embedded namespaces etc, more complicated handling should be used, total agreement here.

For common-case HTML serialization, all namespaces could be dropped, including xhtml.

Vadim
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200402.mbox/%3C4027E7DE.3070102@reverycodes.com%3E
class StringTesting {
    public static void main(String args[]) {
        String str = "abcd";
        String str1 = new String("abcd");
        String str2 = str.substring(0, 2);
        String str3 = str.substring(0, 2);
        String str4 = str.substring(0, str.length());
        String str5 = str1.substring(0, 2);
        String str6 = str1.substring(0, 2);
        String str7 = str1.substring(0, str1.length());
        System.out.println(str2 == str3);
        System.out.println(str == str4);
        System.out.println(str5 == str6);
        System.out.println(str1 == str7);
    }
}

Output:

false
true
false
true

See the comments:

String str = "abcd";               // new String LITERAL which is interned in the pool
String str1 = new String("abcd");  // new String, not interned: str1 != str
String str2 = str.substring(0, 2); // new String which is a view on str
String str3 = str.substring(0, 2); // same: str3 != str2
String str7 = str1.substring(0, str1.length()); // special case: str1 is returned

Notes:

Special case when you call str1.substring(0, str1.length()) — see the code:

public String substring(int beginIndex, int endIndex) {
    // some exception checking, then
    return ((beginIndex == 0) && (endIndex == value.length)) ? this
            : new String(value, beginIndex, subLen);
}

EDIT

What is a view?

Until Java 7u6, a String is basically a char[] that contains the characters of the string, with an offset and a count (i.e. the string is composed of count characters starting from the offset position in the char[]). When calling substring, a new string is created with the same char[] but a different offset / count, to effectively create a view on the original string (except when count = length and offset = 0, as explained above).

Since Java 7u6, a new char[] is created every time, because there is no more count or offset field in the String class.

Where is the common pool stored exactly?

This is implementation specific. The location of the pool has actually moved in recent versions. In more recent versions, it is stored on the heap.

How is the pool managed?
Main characteristics:

- String literals are interned automatically; any other string can be interned manually by calling intern() (e.g. new String("abc").intern();).
- When a string S is interned (because it is a literal or because intern() is called), the JVM will return a reference to a string in the pool if there is one that is equals to S (hence "abc" == "abc" should always return true).
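A short illustration of those two rules (the class name is arbitrary):

```java
public class InternDemo {
    public static void main(String[] args) {
        String a = "abcd";                   // literal: interned automatically
        String b = new String("abcd");       // fresh heap object, not interned
        System.out.println(a == b);          // false: different objects
        System.out.println(a == b.intern()); // true: intern() returns the pooled copy
        System.out.println(a.equals(b));     // true: same character content
    }
}
```

This is why == on strings is only reliable when both sides are known to be interned; equals() is the safe comparison in general.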
https://codedump.io/share/CufZCbnjP9FV/1/how-does-java-store-strings-and-how-does-substring-work-internally
Created on 2008-11-11 01:44 by mikecurtis, last changed 2011-06-27 15:00 by smarnach. This issue is now closed.

Found in Python 2.4; not sure what other versions may be affected. I noticed a contradiction with regard to equivalence when experimenting with NaN, which has the special property that it "is" itself, but it doesn't "==" itself:

>>> a = float('nan')
>>> a is a
True
>>> a == a
False
>>> b = [float('nan')]
>>> b is b
True
>>> b == b
True

I am not at all familiar with Python internals, but the issue appears to be in PyObject_RichCompareBool of python/trunk/Objects/object.c. This method "Guarantees that identity implies equality". However, it doesn't "guarantee" this fact but instead "assumes" it, because it is not something that is always true. NaN is identical to itself, but not equivalent to itself. At a minimum, the contradiction introduced by this assumption should be documented. However, it may be possible to do better, by fixing it. The assumption appears to be that identity should imply equivalence for the common case. Would it therefore be possible to, instead of having objects such as lists perform this optimization and make this assumption, have the base object types implement it? That is, for regular objects, when we evaluate equivalence, we return True if the objects are identical. Then, the optimization can be removed from objects such as list, so that when they check the equivalence of each object, the optimization is performed there. NaN can then override the default behavior, so that it always returns False in equivalence comparisons.

Interesting, Python 3.0 behaves differently than Python 2.x. Nice catch! :)

Python 3.0rc2 (r30rc2:67177, Nov 10 2008, 12:12:09)
[GCC 4.2.4 (Ubuntu 4.2.4-1ubuntu3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> nan = float("nan")
>>> nan is nan
True
>>> nan == nan
False
>>> lst = [nan]
>>> lst is lst
True
>>> lst == lst
False

Python 2.6 (r26:66714, Oct 2 2008, 16:17:49)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> nan = float("nan")
>>> lst = [nan]
>>> lst == lst
True
>>> lst is lst
True

This is indeed interesting. For what it's worth, I think the Python 3.0 behaviour is the right one here. Perhaps it's worth adding a test to Python 3.0 to make sure that this behaviour doesn't accidentally disappear, or at least to make sure that someone thinks about it before making the behaviour disappear. But in general I don't think the fact that NaNs are weird should get in the way of optimizations. In other words, I'm not sure that Python should be asked to guarantee anything more than "b == b" returning False when b is a NaN.

It wouldn't seem unreasonable to consider the behaviour of NaNs in containers (sets, lists, dicts) as undefined when it comes to equality and identity checks. There are other questionable things going on when NaNs are put into containers. For example:

>>> a = float('nan')
>>> b = [a]
>>> a in b
True

A strict reading of the definition of 'in' would say that "a in b" should return False here, since a is not equal to any element of b. But I presume that the 'in' operator does identity checks under the hood before testing for equality. 'Fixing' this so that NaNs did the right thing might slow down a lot of other code just to handle one peculiar special case correctly. Michael, is this adversely affecting real-world code? If not, I'd be inclined to say that it's not worth changing Python's behaviour here.

One more data point: in both 2.x and 3.x, sets behave like the 2.x lists:

Python 3.0rc2+ (py3k:67177M, Nov 10 2008, 16:06:34)
[GCC 4.0.1 (Apple Inc. build 5488)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> s = {float('nan')}
>>> s == s
True
>>> s is s
True

I'm not happy with the 3.0 change. IMO, the best behavior is the one that allows code reviewers to correctly reason about the code by assuming basic invariants. In 2.6, all of the following are always true:

    assert a in [a]
    assert a in (a,)
    assert a in set([a])
    assert [a].count(a) == 1

This is especially important because it lets us maintain a consistent meaning for "in" in its two contexts:

    for a in container:
        assert a in container   # this should ALWAYS be true

This shows that "is" is essential in the interpretation of "in". If you loop over elements in a container, then by definition those exact elements are IN the container. If we break this, it will lead to hard-to-find errors and language inconsistencies. The "identity implies equality" rule isn't just an optimization, it is a deep assumption that pervades the language implementation. Lots of logic relies on it to maintain invariants. It looks like the 3.0 changes are fighting this state of affairs. IMO, those changes are fighting an uphill battle and will introduce more oddities than they eliminate.

Oh h... Raymond, you are so right! I was too tired to think of all the implications related to the different semantics in 3.0, especially the for/in vs. is/in invariant. I'm including Guido and Barry in the discussion. This topic needs some attention from the gods of old. :)

[Raymond]
> assuming basic invariants. In 2.6, all of the following are always
> true:
>
> assert a in [a]
> assert a in (a,)
> assert a in set([a])
> assert [a].count(a) == 1

And these are all still true in 3.0 as well, aren't they? In any case, you've convinced me. I withdraw my comment about the Python 3.0 behaviour being the right one.

Here is a test case for 3.0.

To be clear, I'm saying that PyObject_RichCompareBool() needs to add back the code:

    /* Quick result when objects are the same.
       Guarantees that identity implies equality. */
    if (v == w) {
        if (op == Py_EQ)
            return 1;
        else if (op == Py_NE)
            return 0;
    }

When the above code was originally put in, there was discussion about it on python-dev and time has proven it to be a successful choice. I don't know who took this out, but taking it out was a mistake in a world that allows rich comparison functions to return anything they want, possibly screwing up the basic invariants everywhere we do membership testing. Consider that while PyObject_RichCompare() can return any object and can be used in varied and sundry ways, the opposite is true for PyObject_RichCompareBool(). The latter always returns a boolean, and its internal use cases are almost always ones that assume the traditional properties of equality -- a binary relation or predicate that is reflexive, symmetric, and transitive. We let the == operator violate those properties, but the calls to PyObject_RichCompareBool() tend to DEPEND on them.

---------------------------------------------------------------

P.S. Mark, those Py2.6 invariants are not still true in Py3.0:

IDLE 3.0rc2
>>> a = float('nan')
>>> a in [a]
False
>>> a in (a,)
False
>>> a in set([a])
True
>>> [a].count(a)
0
>>> for x in container: assert x in container
AssertionError

I agree wholeheartedly. NaN comparison weirdness isn't anywhere near important enough to justify breaking these invariants. Though I imagine that if 'x == x' started returning True for NaNs there might be some complaints.

> P.S. Mark, those Py2.6 invariants are not still true in Py3.0:

You're right, of course. With the lines Raymond mentions restored, all tests (including the extra tests Christian posted here) pass for me, and I also get:

>>> a = [float('nan')]
>>> a == a
True

Incidentally, it looks as though the PyObject_RichCompareBool lines were removed in r51533.

All, Thank you for your rigorous analysis of this bug.
To answer the question of the impact of this bug: the real issue that caused problems for our application was Python deciding to silently cast NaN values to 0L, as discussed here:

This would cause us to erroneously recognize 0s in our dataset when our input was invalid, which caused various issues. Per that thread, it sounds like there is no intention to fix this for versions prior to 3.0, so I decided to detect NaN values early on with the following:

    def IsNan(x):
        return (x is x) and (x != x)

This is not the most rigorous check, but since our inputs are expected to be restricted to N-dimensional lists of numeric and/or string values, this was sufficient for our purposes. However, I wanted to be clear as to what would happen if this were handed a vector or matrix containing a NaN, so I did a quick check, which led me to this bug. My workaround is to manually avoid the optimization, with the following code:

    def IsNan(x):
        if isinstance(x, list) or isinstance(x, tuple) or isinstance(x, set):
            for i in x:
                if IsNan(i):
                    return True
            return False
        else:
            return (x is x) and (x != x)

This isn't particularly pretty, but since our inputs are relatively constrained, and since this isn't performance-critical code, it suffices for our purposes. For anyone working with large datasets, this would be suboptimal. (As an aside, if someone has a better solution for a general-case NaN checker, which I'm sure someone does, feel free to let me know what it is.)

Additionally, while I believe that it is most correct to say that a list containing NaN is not equal to itself, I would hesitate to claim that it is even what most applications would desire. I could easily imagine individuals who would only wish for the list to be considered NaN-like if all of its values are NaN. Of course, that wouldn't be solved by any changes that might be made here. Once one gets into that level of detail, I think the programmer needs to implement the check manually to guarantee any particular expected outcome.
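[Editorial note: the recursive workaround above can be written more compactly with math.isnan, which is available from Python 2.6 onward; contains_nan is an illustrative name, not part of the discussion here.]

```python
import math

def contains_nan(x):
    # Walk nested containers; math.isnan only accepts real numbers,
    # so guard with an isinstance check before calling it.
    if isinstance(x, (list, tuple, set)):
        return any(contains_nan(item) for item in x)
    return isinstance(x, float) and math.isnan(x)

print(contains_nan([1, "abc", [2.0, float("nan")]]))  # True
print(contains_nan([1, "abc", [2.0, 3.0]]))           # False
```

Like the poster's version, this only descends into list, tuple and set; other iterables would need extra cases.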
Returning to the matter at hand: while I cringe to know that there is this inconsistency in the language, as a realist I completely agree that it would be unreasonable to remove the optimization to preserve this very odd corner case. For this reason, I proposed a minimal solution here to be that this oddity merely be documented better. Thanks again for your thoughts.

[Michael]
> the real issue that caused problems [...] was Python deciding to
> silently cast NaN values to 0L
> [...]
> it sounds like there is no intention to fix this for versions prior
> to 3.0,

Oh, <rude words> <rude words> <more rude words>! Guido's message does indeed say that that behaviour shouldn't be changed before 3.0. And if I'd managed to notice his message, I wouldn't have 'fixed' it for 2.6.

Python 2.7a0 (trunk:67115, Nov 6 2008, 08:37:21)
[GCC 4.0.1 (Apple Inc. build 5488)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> int(float('nan'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: cannot convert float NaN to integer

:-(. [Imagine me looking extreeemely sheepish.] I guess I owe apologies to Christian and Guido here. Sorry, folks. Is there any way I can make amends?

The issue with nan -> 0L was fixed in Python 2.6. I can't recall if I fixed it or somebody else but I know it was discussed.

$ python2.6
>>> long(float("nan"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: cannot convert float NaN to integer
>>> long(float("inf"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: cannot convert float infinity to integer

Mark, thanks for checking that all the tests still pass. Please do add back the missing lines in PyObject_RichCompareBool(). It seems that their removal was not a necessary part of eliminating default sort-ordering.

(Comment for Michael, everybody else can safely ignore it.
Instead of writing:

    if isinstance(x, list) or isinstance(x, tuple) or isinstance(x, set):

you can write:

    if isinstance(x, (list, tuple, set)):

or even better:

    if hasattr(x, "__iter__"):

Starting with Python 2.6 you can use "from math import isnan", too.)

Here's a patch incorporating Christian's tests, the missing lines from PyObject_RichCompareBool, and some additional tests to check that [x] == [x] and the like remain True even when x is a NaN. Oh, and a Misc/NEWS entry. With this patch, all tests pass (debug and non-debug 32-bit builds) on OS X 10.5/Intel. I think we should get the okay from Guido before committing this. Maybe he remembers why he deleted these lines in the first place. :-)

The tests are passing on Ubuntu Linux 64bit, too. Good work everybody!

Mark, the patch looks good. Thanks. Before committing, please add one other test that verifies the relationship between "in" in membership tests and "in" in a for-loop:

    for constructor in list, tuple, dict.fromkeys, set, collections.deque:
        container = constructor([float('nan'), 1, None, 'abc'])
        for elem in container:
            self.assert_(elem in container)

> Before committing, please add one
> other test that verifies the relationship between "in" in membership
> tests and "in" in a for-loop:

Added. (To test_contains, since this seems like a more appropriate place than test_float.)

Reviewed the updated patch and all is well. Thanks. Mark, would you do the honor, please?

Committed, r67204. Thanks guys, and thanks especially to Michael for spotting this before 3.0 final.
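[Editorial note: with the restored identity shortcut in place (the fix committed as r67204, which is how current CPython behaves), Raymond's invariants can be checked directly from Python:]

```python
import math

a = float("nan")

assert a is a          # identity holds for NaN
assert a != a          # IEEE 754: NaN never compares equal to itself
assert math.isnan(a)

# Identity-implies-equality inside containers keeps membership sane:
assert a in [a]
assert a in (a,)
assert a in set([a])
assert [a].count(a) == 1
assert [a] == [a]      # element-wise compare short-circuits on identity

for constructor in (list, tuple, set):
    container = constructor([a, 1, None, "abc"])
    for elem in container:
        assert elem in container   # "for ... in" and "in" agree
```

All of these assertions pass on a current CPython; only `a == a` itself remains False, as IEEE 754 requires.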
[1]: There are several places in the documentation that are wrong for NaNs; just one example is the documentation of sequence types [2], which states: > This means that to compare equal, every element must compare equal > and the two sequences must be of the same type and have the same > length. [2]: It's probably not worthwhile to "fix" all the places in the documentation that implicitly assume that objects compare equal to themselves, but it probably is a good idea to mention that __eq__() implementations should fulfil this assumption to avoid strange behaviour when used in combination with standard containers. Any thoughts?
http://bugs.python.org/issue4296
After a long time, I returned to TopCoder. I had forgotten how to write algorithms for programming contests such as TopCoder. But I realized previously that it is very important for me to write accurate and fast algorithms within a finite time. In order to improve my programming skill again, I returned to TopCoder. SRM is a little hard for me, so at first I tried some practice problems. Today I solved the SRM144 binary code problem.

This problem decodes messages recursively. For example, when you get the message "123210122", this is the encoding of "011100011". Suppose the first message is P, and the second is Q. Then the following equation holds:

    P[i] = Q[i-1] + Q[i] + Q[i+1]

With this recursive rule, you have to decode the given message. My code is below.

import java.util.*;
import java.math.*;
import static java.lang.Math.*;

public class BinaryCode {
    public String[] decode(String message) {
        // Two answers should be produced.
        // Each answer corresponds to the Q[0] = 0 and Q[0] = 1 cases.
        String[] ans = new String[2];
        Integer start = 0;
        Boolean isOut = null;
        // Calculate both cases
        for (int i = 0; i < 2; i++) {
            // In case a negative value is derived, the answer should be "NONE"
            isOut = false;
            start = i;
            // To improve speed, I use StringBuffer
            StringBuffer p = new StringBuffer();
            // The first and second digits cannot be computed inside the loop
            // because they are not the sum of three factors
            p.append(start.toString());
            Integer p1 = Integer.parseInt(message.substring(0, 1))
                    - Integer.parseInt(p.substring(0, 1));
            p.append(p1);
            // Decode each digit
            for (int j = 1; j < message.length(); j++) {
                Integer d = Integer.parseInt(message.substring(j, j + 1))
                        - Integer.parseInt(p.substring(j, j + 1))
                        - Integer.parseInt(p.substring(j - 1, j));
                if (d < 0) {
                    ans[i] = "NONE";
                    isOut = true;
                    break;
                }
                p.append(d.toString());
            }
            // The last digit does not need to be retained
            if (!isOut) {
                ans[i] = p.toString().substring(0, p.length() - 1);
            }
            // This is a guard. But I am not satisfied with this line :(
            if (message.length() == 1
                    && (message.charAt(0) == '2' || message.charAt(0) == '3')) {
                ans[i] = "NONE";
            }
        }
        return ans;
    }
}

I wrote this code in about 30 minutes. That is not fast enough to compete in an SRM. In addition, I am not satisfied with my algorithm, especially the last clause. I don't want to write exceptional logic if possible. If anyone writes code for this problem, please inform me and give me a chance to look into your code. Thank you.

Written on March 17th, 2014 by Kai Sasaki
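Since the post asks for alternative solutions: here is a sketch of the same decoding idea in Python that validates every derived digit and checks the final digit against its two neighbours, which removes the need for the special-case guard. The function name is mine, not TopCoder's:

```python
def decode(message):
    p = [int(c) for c in message]
    n = len(p)
    answers = []
    for first in (0, 1):              # try Q[0] = 0 and Q[0] = 1
        q = [first]
        ok = True
        for i in range(n - 1):
            prev = q[i - 1] if i > 0 else 0
            nxt = p[i] - q[i] - prev  # from P[i] = Q[i-1] + Q[i] + Q[i+1]
            if nxt not in (0, 1):     # every decoded digit must be a bit
                ok = False
                break
            q.append(nxt)
        # The last digit of P has no right neighbour, so it must equal
        # Q[n-2] + Q[n-1]; this check also covers length-1 messages.
        last_ok = ok and p[n - 1] == q[n - 1] + (q[n - 2] if n > 1 else 0)
        answers.append("".join(map(str, q)) if last_ok else "NONE")
    return answers

print(decode("123210122"))  # ['011100011', 'NONE']
```

Because the consistency of the final digit is verified explicitly, inputs like "2" or "3" fall out as "NONE" without any dedicated guard clause.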
https://www.lewuathe.com/topcoder/programming/srm144-div1.html
#include <ctime>
#include <cstdlib>
#include <iostream>
using namespace std;

int main()
{
    _tzset(); // set all timezone global variables
    cout << "Difference between UTC and the local standard time: "
         << _timezone << " seconds." << endl;
    cout << boolalpha; // switch to literal bools: 'true' and 'false'
    cout << "Is DST in effect? " << (bool) _daylight << endl;
    cout << "Local standard timezone code is: " << _tzname[0] << endl;
    cout << "Local alternate timezone code is: " << _tzname[1] << endl;
}

Output:

Difference between UTC and the local standard time: -7200 seconds.
Is DST in effect? false
Local standard timezone code is: Jerusalem Standard Time
Local alternate timezone code is: Jerusalem Standard.
http://www.devx.com/cplus/10MinuteSolution/34030/0/page/3
In our previous posts we learned how to ; today in this post we learn the following new things while developing the project mentioned in the title.

Objective

Our objective in this post is to create a system which measures the ambient temperature and sends it to a PC using the serial port. To fulfill this we need the following major components:

Analog to Digital Conversion in PIC Microcontrollers

Generally, PIC microcontrollers offer 10-bit analog to digital conversion. This means that when they measure an incoming voltage, they compare that voltage to a reference voltage, and give you the comparison represented as a 10-bit number (0-1023). You can wire as many analog inputs to the PIC as there are analog ports to accept them. However, you will notice some strange effects when you do so. The ADC circuits will affect each other's readings, because all the circuits are pulling off the same +5V source. One thing you can do to minimize this is to use decoupling capacitors at each analog input, to smooth out dips and surges in the current caused by the other ADCs. Typically 0.1µF to 1µF will do the job. Place the capacitors between power and ground, as physically close to the ADC input as you can get. It also helps if you increase the sampling time and the delay between conversion and reading commands. Too much delay and you sacrifice interactivity; too little delay and you get random numbers. Start with the numbers in the sample code above, and vary them until you get readings you're happy with. Finally, it helps to sample at a lower resolution. Sampling at 8 bits instead of 10 will improve the speed and stability of a multiple-ADC program. Since the ADC registers provide you with a 10-bit number converted from the analog input, you can use the following formula to obtain the input voltage:

    Vin = (ADC value / 1023) * Vref

LM35 – Precision Centigrade Temperature Sensor

LM35 is an integrated circuit sensor that can be used to measure temperature, with an electrical output proportional to the temperature in centigrade.
The main reason for using the LM35 is that it doesn't need amplification like thermocouples do. The conversion ratio of the LM35 is 10mV/°C. For example, at a temperature of 35 degrees the LM35 will output 350 mV.

MAX232 – RS232 Transceiver

MAX232 is used for level translation between TTL and RS232. It is a very common and integral component for communication between microcontrollers and a PC using RS232. The USART of the PIC works at 0-5V levels while RS232 works over -15 to +15V levels. The MAX232 acts as a bridge between these two standards.

Project

By combining the above three major components along with the power of the CCS C Compiler we achieve our objectives.

Hardware

Following is the schematic of the whole project.

The output of the LM35 is attached to pin A0, which is an analog input. Pins C6 & C7 are dedicated to the USART as TX and RX respectively. The MAX232 sits between RS232 and the controller to translate their voltages. A DB9 connector is used to connect the PC with the controller.

Software

Following are the steps of the software we will design for the project. Steps 4 to 6 will be performed inside an infinite loop, with a delay added as explained later.

1. Define the ADC Resolution

    #DEVICE ADC=10

The above statement is used to define the resolution of the ADC in bits. The maximum resolution for the PIC16F876 is 10-bit, but you can define it to any number less than the maximum resolution.

2. Initialize and configure the USART with RS232 settings

The following line is used to initialize and configure the USART to start communication:

    #use rs232 (baud=57600,rcv=PIN_C7, xmit=PIN_C6)

This line is used just after the oscillator clock declaration. Usually you have to give it the baud rate and the pins used for communication. If you want to use the built-in hardware USART then you use the above declared pins. But if you would like more than one serial communication port, you can declare communication on any pins. Note, however, that using pins other than C6 and C7 will involve software serial data handling, and it can compromise speed as compared to the hardware USART.

3. Initialize and Configure the ADC

    setup_adc_ports( ALL_ANALOG );
    setup_adc(ADC_CLOCK_INTERNAL );
    set_adc_channel( 0 );

These three lines set up the ADC and initialize it. setup_adc_ports is used to define which pins are to be used as analog inputs. The most suitable is ALL_ANALOG, as it makes all AN pins analog. The next statement defines the clock source for the ADC, since the PIC also has an external clock feature for ADC conversion; the best choice is the internal clock. To select the pin from which to read the analog voltage we use set_adc_channel. In our case the LM35 is connected to pin AN0, so we select channel 0.

4. Reading the ADC Value

    value = read_adc();

The function read_adc() is used to read the value from the ADC register. It contains the digital equivalent of the analog input. For example, if we input 2.5 volts and the resolution defined for the ADC is 10-bit, we will get a value of 512 from read_adc().

5. Conversion into Centigrade

As we know, we get a digital value equivalent to our analog input voltage, so first we convert it into a voltage value.

Please note that by default the reference voltage for the ADC in PIC is 5V. For example, if we get a value of 512 from an ADC of 10-bit resolution, the formula gives 512 / 1023 * 5 ≈ 2.5 volts.

Now we have the voltage value coming from the ADC; next we convert it into temperature with the help of the formula provided by the LM35. If a voltage of 0.375 V is read from the ADC, the formula will return a temperature of 37.5°C.

Implementing both equations in C:

    volt = (value / 1023) * 5;
    temp = volt * 100;

Reading and conversion of the voltage into a digital value and then into temperature is done. Next we will send the data to the PC using a very simple command.

6. Send temperature readings to the PC over the USART

The very reason for the popularity of the CCS C Compiler is its ease of use and powerful commands. Dealing with serial communication is especially easy in this compiler.
To send data to the PC, all you need to do is use the following one-line function:

    printf("Temperature: %.1f\n\r", temp);

If you have used Turbo C you may be very familiar with printf. This is the same old function; the only difference is that when used in Turbo C it displays on the monitor, and when used in PIC it sends data over the serial port. The end result of this function at the serial port would be:

    Temperature: 37.5

The Complete Code

Combining all the parts together we have the following code:

#include <16F876.h>
#DEVICE ADC=10
#fuses HS,NOWDT,NOPROTECT,NOLVP
#use delay(clock=20000000)
#use rs232 (baud=57600,rcv=PIN_C7, xmit=PIN_C6)

float value;
float temp;
float volt;

void main()
{
    // Initialize and Configure ADC
    setup_adc_ports( ALL_ANALOG );
    setup_adc(ADC_CLOCK_INTERNAL );
    set_adc_channel( 0 );

    while(1)
    {
        // 1 Sec Delay
        delay_ms(1000);

        // Read ADC Value
        value = read_adc();

        // Convert Value into Volts
        volt = (value / 1023) * 5;

        // Convert Volts into Temperature
        temp = volt * 100;

        // Send data to PC
        printf("Temperature: %.1f\n\r", temp);
    }
}

You can use HyperTerminal, which comes with Windows XP, or you can use one of the many other serial port monitors easily available for download. Following is a screenshot of HyperTerminal receiving the temperature data.

In our next post we will learn how to make the temperature readings more accurate by using filters and different techniques. We will also learn how to receive the serial data in Visual Basic 6 for further programming and GUI work. Happy InterfacING =)
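The conversion arithmetic from steps 4 and 5 can be checked on its own. A small Python sketch of the same math, assuming the article's values (5V reference, 10-bit resolution, and the LM35's 10 mV/°C slope):

```python
def adc_to_celsius(adc_value, vref=5.0, bits=10):
    """Convert a raw ADC reading to degrees Celsius for an LM35."""
    max_count = (1 << bits) - 1        # 1023 for a 10-bit converter
    volts = (adc_value / max_count) * vref
    return volts * 100.0               # LM35 outputs 10 mV per degree C

print(adc_to_celsius(0))               # 0.0
print(round(adc_to_celsius(512), 1))   # 250.2
print(adc_to_celsius(1023))            # 500.0 (full scale)
```

Note that full scale works out to 500°C, far beyond the LM35's actual range; in practice only the low end of the ADC's span is used, which is one reason the next post's filtering helps.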
http://creativeelectron.net/blog/2009/10/pic-pc-interfaced-digital-thermometer-using-pic-microcontroller/
So far I have managed to make the ball bounce around the form and I have created a bat. All I need is one brick and I don't know how to create it. I know that I need an array but I don't know where to start. Also, the ball just randomly jumps around the form each time, therefore I want to make it disappear when it misses the bat at the bottom of the form. The thing I am stuck on right now is a collision detection method which enables me to make the ball hit the bat, and which makes the ball disappear when it misses the bat at the bottom of the form.

Here is my code:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace Ball_Break_out
{
    public partial class Form1 : Form
    {
        Graphics paper;
        Pen pen;
        private int x, y;
        int xBat;
        private Random ranNum;
        private int xchange, ychange;
        int bricky;

        public Form1()
        {
            InitializeComponent();
            pen = new Pen(Color.White);
            paper = picDisplayBat.CreateGraphics();
            pen = new Pen(Color.DarkRed);
            pen.Width = 5;
            pen.Width = 10;
            ranNum = new Random(); // to create a random num
            picDisplayBat.MouseMove += new System.Windows.Forms.MouseEventHandler(p…
            paper = picDisplayBat.CreateGraphics();
            pen.Width = 3;
        }

        private void picdraw_MouseMove(object sender, MouseEventArgs e)
        {
            paper.Clear(Color.White);
            paper.DrawRectangle(pen, e.X + 10, picDisplayBat.Height - 20, 50, 10);
            xBat = e.X;
        }

        private void timer1_Tick(object sender, EventArgs e)
        {
            x = x + xchange;
            y = y + ychange;
            if (x >= picDisplayBat.Width) xchange = -xchange;
            if (y >= picDisplayBat.Height) ychange = -ychange;
            if (x <= 0) xchange = -xchange;
            if (y <= 0) ychange = -ychange;
            paper.Clear(Color.White);
            paper.DrawEllipse(pen, x, y, 10, 10);
            // Drawing a rectangle shape for the bat.
            paper.DrawRectangle(pen, xBat + 10, picDisplayBat.Height - 20, 50, 10);
            // A brick to check for collision detection.
            paper.DrawRectangle(pen, bricky + 20, picDisplayBat.Height - 50, 30, 10);
        }

        private void btnStartBouncing_Click(object sender, EventArgs e)
        {
            timer1.Interval = 60;
            timer1.Enabled = true;
            x = ranNum.Next(1, picDisplayBat.Height);
            y = ranNum.Next(1, picDisplayBat.Width);
            xchange = ranNum.Next(1, 10);
            ychange = ranNum.Next(1, 10);
        }

        private void btnStopBouncing_Click(object sender, EventArgs e)
        {
            timer1.Enabled = false;
            paper.Clear(Color.White);
        }
    }
}

This post has been edited by Martyr2: 27 November 2011 - 11:41 AM
Reason for edit:: Please use code tags in the future, thanks! :)
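For the collision-detection part of the question: the usual approach is an axis-aligned bounding-box overlap test between the ball's rectangle and the bat's (or a brick's), run each timer tick. A language-neutral sketch in Python; the sample coordinates below are made up, but the comparisons translate directly to the x, y, width, height values drawn in the C# code above:

```python
def rects_overlap(ax, ay, aw, ah, bx, by, bw, bh):
    # Two axis-aligned rectangles overlap iff they overlap on both axes.
    return (ax < bx + bw and bx < ax + aw and
            ay < by + bh and by < ay + ah)

# Ball at (100, 280) sized 10x10 vs. a 50x10 bat at (95, 285): hit.
print(rects_overlap(100, 280, 10, 10, 95, 285, 50, 10))  # True
# Ball well above the bat: miss.
print(rects_overlap(100, 40, 10, 10, 95, 285, 50, 10))   # False
```

On a hit against the bat or a brick, negate ychange; a "missed the bat" check is then just whether the ball's y has passed the bat's y without an overlap, at which point the ball can be hidden and the round ended.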
https://www.dreamincode.net/forums/topic/258268-help-with-breakoutbrickout-game/
I have a string like this:

    key: value

How would I best split the line into key and value variables on the first occurrence of ":", possibly followed by a single space? I am looking for something like this in Perl:

    my ($key, $val) = split(/: ?/, "key: value")

So far I have only come up with this slightly nightmarish version:

    func keyValueSplit<T: StringProtocol>(_ str: T) -> (String, String)? {
        guard let sep = str.firstIndex(of: ":") else { return nil }
        let rest = str[str.index(after: sep)] == " "
            ? str.index(sep, offsetBy: 2)
            : str.index(after: sep)
        let fst = str.prefix(upTo: sep)
        let snd = str.suffix(from: rest)
        return (String(fst), String(snd))
    }
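Not a Swift answer, but to pin down the behaviour the Perl one-liner is asking for, here is the same two-field split expressed with Python's re.split and maxsplit=1 (the helper name is mine):

```python
import re

def key_value_split(s):
    # Split on the first ':' optionally followed by one space,
    # mirroring the two-field assignment from split(/: ?/, ...).
    parts = re.split(r": ?", s, maxsplit=1)
    return tuple(parts) if len(parts) == 2 else None

print(key_value_split("key: value"))    # ('key', 'value')
print(key_value_split("key:value"))     # ('key', 'value')
print(key_value_split("a: b: c"))       # ('a', 'b: c')
print(key_value_split("no separator"))  # None
```

The maxsplit=1 limit is what keeps later colons inside the value, which any Swift solution would also need to handle.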
https://forums.swift.org/t/split-line-on-a-separator-with-optional-whitespace/16859
whatever need you have. BLFS is the book that takes you down your own custom path. You... as 'open source'.   Top 10 Open Source Software Until now open source software were the trendsetters in the modern day... and greatest success stories of software field belong to this category of open source software. Open source software are typically developed and licensed under a open... Source POS Project This project is to create an open-source Point of Sale Open Source content Management System Open Source content Management System The Open Source Content Management System OpenCms is a professional level Open Source Website Content... existing modern IT infrastructure. OpenCms runs in a "full open source" environment How To Use Open Source Fonts How to use of Open source fonts is growing among people day by day... as they have become common now. People need something new and exciting and Open Source fonts are giving them that freedom. There are thousands of Open Source C file open example C file open example This section demonstrates you how to open a file in C. The header...++ IDE and my source file is in the C:\TC\BIN directory and the output file Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://roseindia.net/tutorialhelp/comment/99539
This article also covers the column_definition and table_constraint definitions. We'll include them here as well to prevent the need to flip to and fro between the two articles. The full PDFs for the syntax diagrams are supplied so that they can be printed out at any size. The versions in the articles are just thumbnails, and can also be clicked on for better viewing.

- ALTER TABLE overview
- ALTER TABLE column
- ALTER TABLE miscellaneous
- Table Constraint
- Column Constraint
- Computed Column Definition
- Column Definition

ALTER TABLE overview syntax

The ALTER TABLE DDL statement allows you to:

- ALTER a column by changing its data type, size, collation, NULLity, and so on
- ADD one or more columns or constraints
- DROP a constraint or column
- ENABLE or DISABLE a trigger, constraint, change tracking, or filetable namespace
- Either CHECK or NOCHECK (enable or disable) a constraint
- SWITCH a partition to a different table
- SET filestream on, the filetable directory, or lock escalation to either 'auto' or 'table', or disable it
- REBUILD a partition to change the compression

We can summarise this in the following railroad diagram. (See here for an explanation of the syntax.)

Most of these operations are straightforward, but others have repercussions. These repercussions can be for data, views, indexes, constraints, statistics or computed columns. Data modification is one common repercussion. ALTER COLUMN, in particular, could easily require changes to data if the table contains any rows. This starts immediately, requires a schema-modify lock on the table, and the changes are logged so that they are recoverable. This can cause problems in a working database, particularly when a column is added or dropped in a large table. The rollout of a database refactoring to production can be painful in a database of any serious size.
If, for example, you wish to add a column to a large and important table, then every other process that needs to access that table will be blocked for the duration of the modification. Other ALTER operations on large tables, such as changing the clustered index, can take a long time. In some cases it is just too difficult to do the operation, and so some types of modification are forbidden. There is, for example, no RENAME keyword, because the mopping-up exercise of finding and changing all references to the column is just too complicated (you have to do things the hard way with sp_rename).

This means that there are some rather complicated rules about what you can alter in a column or constraint. Many of these rules are common sense, but other rules for altering a component of a table are rather difficult to follow, and so I can only give a brief summary. If a column participates in a schema-bound view, you cannot change that column. You can't alter a column if it has a timestamp data type, is a computed column or is used in a computed column, is the ROWGUIDCOL for the table, or is used in a primary key or foreign key constraint. Only varchar, nvarchar, or varbinary columns that are having their length increased can be altered if they are being used in an index, check constraint, or custom statistics. Columns that are associated with a default definition can only have their length, precision, or scale changed. The data type of a column in a partitioned table can't be altered. If you change the length, precision, or scale of a column with the ALTER COLUMN phrase, and data already exists in the column, the new size must be sufficient to hold the largest data item in the column. Although you can alter the length of a column if it is involved in an index, a primary key or foreign key constraint, or explicit statistics, you can't change its type.
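The rule above about indexed character columns can be illustrated with a short T-SQL sketch (the table, column, and index names are invented for illustration, and the sketch assumes the column is declared NOT NULL and is covered by an index):

```sql
-- Hypothetical table: dbo.Customer(Email varchar(50) NOT NULL),
-- with a nonclustered index on Email.

-- Allowed: a varchar column used in an index may have its length increased.
-- (Nullability is restated, since ALTER COLUMN resets it to the default otherwise.)
ALTER TABLE dbo.Customer ALTER COLUMN Email varchar(100) NOT NULL;

-- Not allowed while the index exists: changing the column's type.
-- ALTER TABLE dbo.Customer ALTER COLUMN Email nvarchar(100) NOT NULL;  -- fails
```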
If the column is part of a PRIMARY KEY constraint, then only a varchar, nvarchar, or varbinary column can be altered in size. You are allowed to drop a column as long as there are no existing indexes, table constraints, defaults or views using that column; it can't be dropped if it is bound to a rule. When you drop a column, any associated automatic statistics are dropped at the same time. Indexes can't be dropped directly, but they are dropped automatically if they are associated with a constraint that is then dropped. When you are dropping a constraint that involves a clustered index, you can avoid disruption to other processes if you specify the ONLINE = ON option (Enterprise edition only) so that the necessary rebuild does not block queries and modifications to the underlying data and associated non-clustered indexes. Before doing so, make sure that the index is enabled.

The problems with NULL and NOT NULL

It is important to specify explicitly the nullability (NULL or NOT NULL) of a column; never rely on a default. Changing the nullability of a column (whether the column accepts nulls) can cause frustration. If the table has rows, then you can only change from NULL to NOT NULL if you specify a DEFAULT value. If you are dealing with a computed column, then you can make it NOT NULL only if it is PERSISTED.

When you use the ADD COLUMN phrase, the problem is what to do with existing rows in the table. If your column is NULL, then it is filled with NULLs, even if there is a default, since NULL is a valid value; you need to add the WITH VALUES phrase to the default to fill the column with particular values. Otherwise, if your column is NOT NULL, you have to include a default constraint, and the value in that default constraint is then used. If you neglect to add the default constraint, the operation will fail.
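The ADD COLUMN behaviour described above can be sketched in T-SQL (the table, column, and constraint names are invented for illustration, and the table is assumed to already contain rows):

```sql
-- NOT NULL column: a default constraint is required when the table has rows,
-- and the default value is written into every existing row.
ALTER TABLE dbo.Invoice
    ADD CreatedDate datetime NOT NULL
        CONSTRAINT DF_Invoice_CreatedDate DEFAULT (GETDATE());

-- Nullable column: existing rows are filled with NULL, even though a default exists...
ALTER TABLE dbo.Invoice
    ADD Notes varchar(200) NULL
        CONSTRAINT DF_Invoice_Notes DEFAULT ('none');

-- ...unless WITH VALUES is added, which writes the default into existing rows instead.
ALTER TABLE dbo.Invoice
    ADD Comments varchar(200) NULL
        CONSTRAINT DF_Invoice_Comments DEFAULT ('none') WITH VALUES;
```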
Alter Table Column syntax

Disabling constraints

By using the NOCHECK keyword with a newly-added or re-enabled FOREIGN KEY or CHECK constraint as, for example, in…

…the constraint is disabled, so that the data in the table is not validated against it. Although this might help in rare cases, the query optimizer does not consider any constraints that are defined WITH NOCHECK, since they are flagged as 'is_not_trusted', so queries are likely to run more slowly. They have to be enabled by a subsequent ALTER TABLE statement. The default setting for new constraints is WITH CHECK, but WITH NOCHECK is assumed for re-enabled constraints unless you explicitly add WITH CHECK. Although it is useful to disable constraints temporarily for a bulk import of data, they must be re-enabled after the operation.

If you do not want, for some reason, to verify new CHECK or FOREIGN KEY constraints against existing data, use WITH NOCHECK. However, WITH NOCHECK does not affect later data updates, which will fail if the inserted data does not comply with the constraint. The safest approach is to check all data before a constraint is altered and always specify WITH CHECK; but if you use WITH NOCHECK, then fix all potentially non-compliant data. Perhaps check with…

…in order to test that the data meets the conditions of the constraint (or the referential integrity, in the case of a foreign key), and then enable all constraints with…

…to allow SQL Server to check the validity of the constraint and flag it as 'trusted'.

Alter Table Components syntax

The Column Definition syntax

Column Constraint syntax

Computed Column Definition

Table Constraint

Conclusions

Because ALTER TABLE is used for so many different purposes, it is difficult to produce a clear overview, especially since SQL Server has introduced a number of extensions that are occasionally rather clumsy, for the purposes of managing compression, filestreams and partitions.
Even so, I found that producing these diagrams helped my understanding of the ALTER TABLE syntax, and I often refer to them. I very much hope you'll find them as useful as I do.

I don't believe that even the casual user of relational databases should entirely avoid using SQL for DDL in favour of a GUI. The moment I find myself going 'off-piste' with altering tables, I find that it is usually easier and safer to work with the syntax diagrams and use T-SQL than to rely on SSMS to alter tables, or to reverse-engineer the syntax from the examples provided; especially if you have SQL Prompt as well, as a spirit guide.

I'd like to keep this set of diagrams updated, so please let us know if you spot any errors and I'll do my best to fix them.

Acknowledgements

Many thanks to the pioneers of the railroad diagram, and also to the MSDN ALTER TABLE page, from which the explanatory quotes within the diagrams were taken and adapted.
https://www.red-gate.com/simple-talk/sql/t-sql-programming/sql-server-alter-table-syntax-diagrams/
Hi,

I would like to check if a given command queue is valid. (I have a lot of host code, with some functions taking a command queue as a parameter, and I had a bug where I passed an uninitialised command queue, so I would like to avoid such a bug in the future.)

Anyway, I have tried a simple:

Code:
bool queueValid(const cl_command_queue& queue)
{
    size_t size;
    cl_int err = clGetCommandQueueInfo(queue, CL_QUEUE_DEVICE, 0, NULL, &size);
    return (err == CL_SUCCESS);
}

But it turns out that calling clGetCommandQueueInfo on an uninitialised queue segfaults. (AMD APP SDK 2.7)

I have seen a previous post about "Check whether cl_mem object is valid", but it does not seem to apply to queues.

Any pointer? (no pun intended!)

Thanks a lot in advance.
Seb
http://www.khronos.org/message_boards/showthread.php/8517-Checking-wether-a-command-queue-is-valid?p=27827&mode=threaded
IRC log of CSS on 2009-10-21 Timestamps are in UTC. 15:47:46 [RRSAgent] RRSAgent has joined #CSS 15:47:46 [RRSAgent] logging to 15:55:22 [Zakim] Style_CSS FP()12:00PM has now started 15:55:30 [Zakim] +dsinger 15:55:54 [dsinger] dsinger has joined #css 15:56:41 [dsinger] zakim, who is on the phone? 15:56:41 [Zakim] On the phone I see dsinger 15:56:56 [dsinger] zakim, mute me 15:56:56 [Zakim] sorry, dsinger, muting is not permitted when only one person is present 15:58:18 [dsinger] zakim, who is on the phone? 15:58:18 [Zakim] On the phone I see dsinger 15:58:45 [dsinger] zakim, who is making noise? 15:58:52 [Zakim] +plinss 15:58:57 [Zakim] dsinger, listening for 10 seconds I could not identify any sounds 15:59:02 [dsinger] zakim, mute me 15:59:02 [Zakim] dsinger should now be muted 15:59:19 [Zakim] +??P24 15:59:26 [fantasai] Zakim, ? is fantasai 15:59:26 [Zakim] +fantasai; got it 16:00:01 [Zakim] +[Microsoft] 16:00:29 [Zakim] +TabAtkins 16:00:35 [sylvaing] Zakim, [Microsoft] has sylvaing 16:00:35 [Zakim] +sylvaing; got it 16:01:36 [glazou] VoIP dead :( 16:01:47 [Zakim] +glazou 16:02:09 [Zakim] +David_Baron 16:02:13 [dbaron] Zakim, mute David_Baron 16:02:13 [Zakim] David_Baron should now be muted 16:02:28 [ChrisL] ChrisL has joined #css 16:03:06 [fantasai] dbaron, ??? 16:04:27 [TabAtkins] Hey, I have great voip. Your service just sucks. 16:04:32 [Zakim] +Bert 16:04:34 [glazou] sylvaing: bill ? not even, free.fr, 18Megabits, free calls to landlines in 100 countries 16:04:45 [dsinger] dsinger has changed the topic to: conversing silently and semantically 16:04:56 [plinss] free doesn't help if it doesn't work... 
16:05:08 [ChrisL] yes, but not necessarily a "real phone" in ters of the connection 16:05:13 [glazou] yeah, that's the 1st time they have a network issue 16:05:18 [glazou] on voip 16:05:33 [dsinger] dsinger has changed the topic to: CSS working group 16:05:40 [szilles] szilles has joined #css 16:05:57 [Zakim] +ChrisL 16:06:48 [fantasai] 16:06:50 [Zakim] +SteveZ 16:07:01 [fantasai] 16:08:11 [fantasai] 16:09:18 [dbaron] Zakim, unmute David_Baron 16:09:18 [Zakim] David_Baron should no longer be muted 16:10:11 [fantasai] ScribeNick: fantasai 16:10:19 [fantasai] dbaron: I don't think this should be done for ARIA 16:10:22 [dbaron] Zakim, mute David_Baron 16:10:22 [Zakim] David_Baron should now be muted 16:10:26 [fantasai] dbaron: I think ARIA should remain descriptive 16:10:57 [fantasai] [discussing :checked pseudo-class and ARIA] 16:11:56 [fantasai] My answer is that it's out-of-scope for Selectors 16:12:06 [fantasai] Whether a node is checked is defined by the document language 16:12:13 [fantasai] whether that's defined by the HTML4 spec 16:12:24 [fantasai] or HTML5+DOM+SVG+ARIA+something-else 16:13:25 [fantasai] People seem to agree with this 16:14:30 [fantasai] Chris: We asked them a question about ::before and ::after, they don't seem to have responded. 16:15:33 [fantasai] fantasai: so who should respond? 16:15:35 [fantasai] Daniel: you 16:15:43 [fantasai] Bert: Make sure they know it is an official response 16:16:01 [fantasai] ACTION fantasai respond to Janina 16:16:01 [trackbot] Created ACTION-181 - Respond to Janina [on Elika Etemad - due 2009-10-28]. 
16:16:19 [fantasai] 16:16:59 [fantasai] fantasai: This was raised against implementors, actually, but filed against Selectors 16:17:14 [Zakim] +CesarAcebal 16:17:18 [fantasai] fantasai: I think it's out-of-scope for Selectors, just wanted to check that that's acceptable 16:17:29 [CesarAcebal] CesarAcebal has joined #css 16:18:01 [fantasai] Tab: Seems fine to me 16:18:03 [dbaron] Zakim, unmute David_Baron 16:18:03 [Zakim] David_Baron should no longer be muted 16:18:14 [dbaron] Zakim, mute David_Baron 16:18:14 [Zakim] David_Baron should now be muted 16:18:26 [fantasai] dbaron: I'm not sure how reasonable Anne's suggestion will be for our UA style sheet, but that's another story 16:18:48 [fantasai] Peter: No objections to treating as out-of-scope 16:19:05 [fantasai] Issue 1. 16:19:05 [fantasai] Summary: Remove suggestion to use \: to match namespaced XML in down-level UAs 16:19:05 [fantasai] From: Krzysztof Maczynski 16:19:05 [fantasai] Comment: 16:19:05 [fantasai] Response: 16:19:08 [fantasai] Comment: 16:19:23 [dsinger] dsinger has joined #css 16:19:50 [fantasai] fantasai: Anne suggested removing the entire section. I don't mind 16:20:16 [fantasai] Peter: I agree as well, and I original wrote the section 16:20:39 [fantasai] Peter: It was mainly an argument to Microsoft 10 years ago explaining why escaped colons was a bad way to handle it and that we needed a Namespaces spec 16:20:47 [fantasai] ACTION fantasai remove Chapter 11 16:20:47 [trackbot] Created ACTION-182 - Remove Chapter 11 [on Elika Etemad - due 2009-10-28]. 
16:21:28 [fantasai] fantasai: THat leaves 3 more issues, all on defining things defined in 2.1 but not here 16:22:21 [fantasai] fantasai: I don't have any concrete proposals here 16:22:34 [fantasai] Bert: Point to CSS2.1 and HTML and XML specs 16:22:37 [ChrisL] q+ to ask a point of clarification about :indeterminate pseudo-class 16:23:38 [Zakim] +[apple] 16:23:43 [fantasai] fantasai: For ::first-line (issue 10), I can split out a lot of the CSS-specific stuff, but I can't figure out a general way to explain that we're referring to the first line of a block 16:23:51 [Zakim] -dsinger 16:23:58 [dsinger] zakim, [apple] has dsinger 16:23:58 [Zakim] +dsinger; got it 16:25:55 [ChrisL] say the host language defines it 16:26:38 [fantasai] fantasai: I'll try to fix that and post my edits to www-style for review 16:26:57 [fantasai] Chris: For Issue 6, refer to XML for the first 3 and to the DOM spec for DOM 16:27:40 [fantasai] Chris: e.g. DOM3 Core 16:27:55 [fantasai] Chris: I have a question about :indeterminate 16:28:20 [fantasai] Chris: The spec has a note about :indeterminate, no normative text, but we have a test in the test suite 16:28:38 [fantasai] Chris: We should either define :indeterminate, or drop the test 16:28:46 [dbaron] Zakim, unmute David_Baron 16:28:46 [Zakim] David_Baron should no longer be muted 16:28:47 [fantasai] and the section 16:28:57 [dbaron] Zakim, mute David_Baron 16:28:57 [Zakim] David_Baron should now be muted 16:29:04 [fantasai] dbaron: Isn't this the result of agreeing to drop it in the past and not having changed the test suite to match? 16:29:22 [fantasai] fantasai: I don't know, I wasn't editing the spec at the time. Daniel? 
16:29:44 [plinss] ack ChrisL 16:29:44 [Zakim] ChrisL, you wanted to ask a point of clarification about :indeterminate pseudo-class 16:29:54 [fantasai] Daniel: :indeterminate was not in the original Selectors spec, it was added by Tantek 16:30:15 [fantasai] Chris: I think the tests mostly fail, so I suggest dropping those tests 16:30:36 [fantasai] ACTION fantasai drop :indeterminate from tests 16:30:36 [trackbot] Created ACTION-183 - Drop :indeterminate from tests [on Elika Etemad - due 2009-10-28]. 16:30:49 [fantasai] Chris: Why aren't references split into normative and informative? 16:31:26 [fantasai] fantasai has no clue 16:32:29 [fantasai] fantsai: I can make that split 16:32:40 [fantasai] s/fantsai/fantasai/ 16:34:19 [fantasai] fantasai: I'll have to reword the PCDATA/etc. to not require normative references to HTML/XML 16:34:44 [fantasai] ... 16:35:08 [fantasai] Steve: Is there a section that says what things are language-specific? 16:35:17 [fantasai] Chris quotes section 13 - Conformance 16:35:59 [fantasai] Peter: are all concerns answered? 16:36:23 [dbaron] Zakim, unmute David_Baron 16:36:23 [Zakim] David_Baron should no longer be muted 16:36:44 [dbaron] Zakim, mute David_Baron 16:36:44 [Zakim] David_Baron should now be muted 16:36:51 [fantasai] Chris: So we want to move to CR, how long should it be in CR/ what are status of implementations? 16:36:56 [fantasai] dbaron: Why can't we move to PR? 
16:37:02 [dbaron] Zakim, unmute David_Baron 16:37:02 [Zakim] David_Baron should no longer be muted 16:37:07 [dbaron] Zakim, mute David_Baron 16:37:07 [Zakim] David_Baron should now be muted 16:37:14 [fantasai] dbaron: I thought we had already met our CR exit criteria 16:37:20 [dbaron] Zakim, unmute David_Baron 16:37:20 [Zakim] David_Baron should no longer be muted 16:37:37 [dbaron] Zakim, mute David_Baron 16:37:37 [Zakim] David_Baron should now be muted 16:38:01 [dbaron] Zakim, unmute David_Baron 16:38:01 [Zakim] David_Baron should no longer be muted 16:38:07 [dbaron] Zakim, mute David_Baron 16:38:07 [Zakim] David_Baron should now be muted 16:38:15 [dbaron] Zakim, unmute David_Baron 16:38:15 [Zakim] David_Baron should no longer be muted 16:38:26 [fantasai] dbaron: I thought we already had implementation reports 16:38:29 [fantasai] fantasai can't find them 16:39:04 [fantasai] fantasai: If we don't have implementation reports in the next 2 weeks then I think we should move to CR 16:39:15 [dbaron] Zakim, mute David_Baron 16:39:15 [Zakim] David_Baron should now be muted 16:40:08 [fantasai] Bert has links to the implementation reports 16:40:30 [dbaron] 16:40:45 [Bert] FF 1.5: 16:40:56 [Bert] Opera 9.5 16:41:07 [dbaron] Zakim, unmute David_Baron 16:41:07 [Zakim] David_Baron should no longer be muted 16:41:10 [Bert] Konq 3.5 16:41:29 [fantasai] 16:41:34 [Bert] Amaya 9.55 16:41:34 [dbaron] Zakim, unmute David_Baron 16:41:34 [Zakim] David_Baron was not muted, dbaron 16:41:34 [fantasai] 16:41:37 [dbaron] Zakim, mute David_Baron 16:41:37 [Zakim] David_Baron should now be muted 16:41:40 [fantasai] which is linked from the test suite 16:41:57 [fantasai] I haven't dropped the :indeterminate tests yet 16:42:07 [fantasai] I'll do so soon 16:42:21 [fantasai] Chris: The implementation report template is awkward, you have to record the same test multiple places 16:43:43 [fantasai] We will discuss the transition at TPAC 16:43:51 [fantasai] Peter: We'll have implementation reports by 
then hopefully? 16:44:37 [fantasai] Chris is working on an Opera one 16:44:43 [fantasai] dbaron is working on Mozilla (?) 16:44:56 [fantasai] fantasai: Can we get one for WebKit? 16:45:43 [fantasai] Test Suite: 16:46:40 [fantasai] Peter: Media Queries syntax from Yves 16:46:46 [plinss] 16:47:41 [fantasai] Bert: First issue was about using functional syntax for media queries, not much chance for that 16:47:58 [fantasai] Bert: Second issue was to include the parentheses in the CSS2.1 Appendix G grammar 16:48:02 [dbaron] fantasai, the current implementation report template has one link to the 20090603 test suite, but the individual test links are all to the 20060307 test suit (note year!) 16:48:06 [fantasai] Bert: Because that's what he uses to parse 16:48:10 [dbaron] fantasai, I'm going to assume that's wrong and search-replace the whole thing before running it 16:48:29 [fantasai] dbaron, yeah, I guess I have to fix that 16:48:44 [fantasai] Bert: I'm not in favor. It's not wrong to add it, but it's not necessary 16:49:16 [dbaron] Zakim, unmute David_Baron 16:49:16 [Zakim] David_Baron should no longer be muted 16:49:20 [fantasai] fantasai: If Yves needs it for his implementation, then he can add it. 
We don't need it for the spec, so we dont' need to edit the spec 16:49:43 [fantasai] dbaron: The Appendix G grammar is only a description of what's in CSS2.1, it's not a guide for implementations 16:49:45 [dbaron] Zakim, mute David_Baron 16:49:45 [Zakim] David_Baron should now be muted 16:50:01 [fantasai] dbaron: It's sometimes useful where the spec fails to give a grammar elsewhere, but we should really try to eliminate those 16:50:55 [fantasai] Bert describes what Media Queries does to the grammar 16:50:59 [fantasai] Bert: I'm happy to do nothing here 16:51:22 [fantasai] RESOLVED: Closed no change 16:52:05 [plinss] Topic: @page and the CSSOM 16:52:07 [plinss] 16:52:35 [fantasai] fantasai: I have no idea, but the @page rule with its nesting structure can't be changed at this point 16:52:42 [fantasai] fantasai: Also we have nesting for @media... 16:53:53 [fantasai] 16:54:21 [fantasai] Chris: I'm concerned that we would replace the explosion of properties with the explosion of pseudo-elements 16:54:41 [fantasai] Tab: There was some discussion of this with Brad and roc 16:54:49 [fantasai] Tab: We don't want the combinatorial explosion 16:55:00 [fantasai] Tab: But other ways of naively doing it break the CSS model for values 16:55:12 [fantasai] Daniel: Microsoft introduced filters long long ago 16:55:33 [fantasai] Daniel: It would be cool to provide a replacement for this proprietary feature 16:55:38 [dbaron] fantasai, I'm going to prepare a revised implementreportTEMPLATE.html file 16:56:26 [fantasai] Chris: SVG filters is being split out into a separate file, to allow other specs to incorporate them 16:57:28 [fantasai] ... 16:57:42 [fantasai] Daniel: After transitions, animations, and transforms filters will be the next feature requested by web designers 16:58:05 [fantasai] Tab: So hash out more of this on the list? I don't think this is something we can resolve on the call 16:58:35 [fantasai] Tab: It's a big issue. 
No matter what we do, it will be something large and complicated to work out 16:58:47 [fantasai] Peter: Any benefit to working F2F at TPAC? 16:58:59 [fantasai] Daniel: If we follow Chris's suggestion, it would be something to discuss with SVG 16:59:16 [fantasai] fantasai: SVG won't be meeting at TPAC 16:59:24 [fantasai] Chris: Some members will be present, but not the group 16:59:45 [fantasai] Chris: Erik would be the best person to talk to, since he's editor for Filters spec 17:00:00 [fantasai] Deferred to TPAC and beyond 17:00:57 [fantasai] Meeting closed. 17:00:59 [Zakim] -[Microsoft] 17:01:01 [Zakim] -SteveZ 17:01:02 [Zakim] -ChrisL 17:01:02 [Zakim] -TabAtkins 17:01:03 [Zakim] -glazou 17:01:03 [Zakim] -plinss 17:01:05 [Zakim] -[apple] 17:01:06 [Zakim] -CesarAcebal 17:01:07 [Zakim] -David_Baron 17:01:12 [Zakim] -Bert 17:01:14 [Zakim] -fantasai 17:01:14 [Zakim] Style_CSS FP()12:00PM has ended 17:01:15 [Zakim] Attendees were dsinger, plinss, fantasai, TabAtkins, sylvaing, glazou, David_Baron, Bert, ChrisL, SteveZ, CesarAcebal 17:01:34 [fantasai] dbaron, cool, can you check that into a reports/ subdirectory of and I'll copy to w3.org? 17:02:07 [glazou] glazou has left #css 17:06:18 [dbaron] fantasai, I sent it to 17:06:28 [dbaron] fantasai, so you can check it in whereever else... 17:06:45 [fantasai] ok 17:09:26 [myakura_] myakura_ has joined #css 17:21:03 [dbaron] fantasai, actually, I found a few more mistakes 17:21:11 [dbaron] fantasai, a few tests that have been marked HTML-only, I think 17:53:19 [bradk] bradk has joined #css 17:54:43 [bradk_] bradk_ has joined #css 17:55:12 [bradk_] What'd I miss? 17:57:51 [bradk] bradk has joined #css 18:03:27 [dbaron] fantasai, maybe the reports should just go in CSS/selectors3-test-suite on dev.w3.org ? 18:03:43 [dbaron] just like we did for the css3-color test suite reports? 18:03:43 [sylvaing] sylvaing has joined #css 18:04:20 [dbaron] fantasai, or is that no longer a current copy of the test suite? 
19:08:20 [Zakim] Zakim has left #CSS 20:08:55 [sylvaing] sylvaing has joined #css 20:32:54 [fantasai] dbaron: Arron and I shifted all the test stuff to test.csswg.org 20:33:04 [fantasai] dbaron: Much less confusion and version history munging that way 20:33:16 [dbaron] fantasai, Anything that's no longer the main copy should be deleted 20:33:23 [fantasai] dbaron: already done 20:33:29 [dbaron] fantasai, otherwise I'll just keep editing whatever's on dev.w3.org 20:33:33 [fantasai] dbaron: dev.w3.org/CSS should be empty 20:34:03 [dbaron] ah, yeah, mostly 20:34:13 [fantasai] dbaron: Couldn't delete the directories 20:34:16 [fantasai] dbaron: but they're all empty 20:34:27 [dbaron] fantasai, if directories are empty they go away if you give cvs the right option 20:34:32 [dbaron] fantasai, you should never cvs remove a directory 20:34:48 [dbaron] fantasai, your ~/.cvsrc should have: 20:34:51 [dbaron] 20:34:52 [dbaron] update -P 20:35:14 [fantasai] k 20:35:29 [dbaron] so I should make a 'reports' subdir of approved/selectors3/ ? 20:35:33 [fantasai] yes 20:35:49 [fantasai] feel free to make any edits you feel are necessary there 20:36:05 [dbaron] boy, every time I update csswg.org svn, it seems like massive chunks of files I already have get moved somewhere else 20:36:06 [fantasai] and I'll remove the :indeterminate tests and republish the test suite on Friday 20:36:15 [fantasai] heh 20:36:16 [fantasai] well 20:36:20 [fantasai] we reorganized the repository 20:36:23 [fantasai] a few weeks ago 20:36:42 [fantasai] I think we'll be staying with this structure for awhile 20:36:57 [fantasai] it's easier for contributors 20:39:26 [dbaron] fantasai, what's my username for svn.csswg.org? 20:40:07 [dbaron] fantasai, and do I need to use a URL beginning with something other than http: to commit? 20:40:44 [dbaron] (I think I may have had my client configured properly for the previous location of the repository, but I deleted that tree...) 
20:42:22 [fantasai] dbaron, and no 20:42:33 [dbaron] fantasai, "and no"? 20:42:44 [fantasai] https should be fine 20:43:05 [dbaron] fantasai, what's my username? 20:43:15 [fantasai] dbaron 20:43:29 [dbaron] and how do I authenticate? 20:43:43 [fantasai] you should have the password in your inbox 20:43:59 [fantasai] I can send it to you again if you can't find it 20:44:04 [fantasai] but all the info should be in that email 20:44:19 [dbaron] do I need to check the whole tree out again if I currently have it from http: rather than https: ? 20:44:32 [fantasai] I don't know enough about SVN to answer that 20:44:53 [dbaron] fantasai, what year was this email sent in? 20:45:05 [dbaron] fantasai, and how might I search for it? 20:46:29 [dbaron] https: doesn't seem to work 20:47:16 [dbaron] ok, I found the password 20:47:20 [dbaron] still don't know how to use https: 20:47:28 [dbaron] 20:47:31 [dbaron] Date: Wed, 03 Sep 2008 16:11:28 -0700 20:48:25 [dbaron] doesn't look like that server responds to https:, at least not on the normal port 20:48:51 [fantasai] hm, maybe it's just http then 20:49:33 [fantasai] since that's what I sent, then that's probably what you need to log in 20:50:44 [dbaron] fantasai, what do you see when you type "svn info" in your checkout of that repository? 20:50:51 [dbaron] fantasai, http: or something else? 20:51:15 [fantasai] http 20:51:24 [dbaron] nice to know this is completely insecure... 20:51:41 [fantasai] plinss: ping 20:58:47 [dbaron] fantasai, ok, reports checked in 20:59:15 [fantasai] dbaron: cool, thanks 21:01:08 [fantasai] dbaron: btw, I wanted to get hixie's tests checked in this week 21:01:41 [fantasai] dbaron: but he told me you were in charge of that, or something 21:01:45 [fantasai] dbaron: 21:01:58 [fantasai] dbaron: So I wanted to know what I should do. 21:01:58 [dbaron] he wanted me to be, but I'm not 21:02:18 [fantasai] dbaron: so it's ok if I just check them in? 
21:02:24 [dbaron] if he says it is, yes 21:02:26 [dbaron] which I thought he did 21:02:29 [fantasai] ok 21:02:31 [fantasai] I'll do that then 21:02:32 [fantasai] thanks 22:49:42 [arronei] arronei has joined #CSS 23:02:44 [bradk] bradk has joined #css 23:20:36 [MikeSmith] MikeSmith has joined #css
http://www.w3.org/2009/10/21-CSS-irc
IRC log of tagmem on 2003-07-28 Timestamps are in UTC. 18:56:32 [RRSAgent] RRSAgent has joined #tagmem 18:57:18 [Zakim] TAG_Weekly()2:30PM has now started 18:57:19 [Zakim] +Chris 18:58:34 [Zakim] +??P0 18:59:04 [Zakim] +??P1 18:59:09 [Ian] Ian has joined #tagmem 18:59:11 [TBray] TBray has joined #tagmem 18:59:21 [Zakim] +Norm 18:59:44 [Zakim] +TimBL 18:59:56 [Zakim] +??P2 18:59:57 [Zakim] -Norm 19:00:27 [Zakim] +Norm 19:00:39 [Zakim] -Norm 19:00:43 [Ian] zakim, call Ian-BOS 19:00:43 [Zakim] ok, Ian; the call is being made 19:00:44 [Zakim] +Ian 19:01:02 [Zakim] +Norm 19:01:32 [Zakim] -Norm 19:01:33 [Chris] zakim, mute me 19:01:33 [Zakim] Chris should now be muted 19:01:38 [Chris] is that better? 19:01:52 [Chris] sometimes there is echo if zakim dials me 19:02:26 [Zakim] +Norm 19:03:10 [Norm] zakim, who's here? 19:03:10 [Zakim] On the phone I see Chris (muted), ??P0, ??P1, TimBL, ??P2, Ian, Norm 19:03:11 [Zakim] On IRC I see TBray, Ian, RRSAgent, DanC_g, Zakim, Norm, Chris, timbl 19:03:31 [Norm] zakim, ??P0 is Paul 19:03:31 [Zakim] +Paul; got it 19:03:35 [Norm] zakim, ??P1 is PatHayes 19:03:35 [Zakim] +PatHayes; got it 19:03:36 [Zakim] +DOrchard 19:03:41 [Norm] zakim, ??P2 is TBray 19:03:41 [Zakim] +TBray; got it 19:03:47 [Norm] zakim, who's here? 19:03:47 [Zakim] On the phone I see Chris (muted), Paul, PatHayes, TimBL, TBray, Ian, Norm, DOrchard 19:03:49 [Zakim] On IRC I see TBray, Ian, RRSAgent, DanC_g, Zakim, Norm, Chris, timbl 19:04:03 [Ian] zakim, who's here? 19:04:03 [Zakim] On the phone I see Chris (muted), Paul, PatHayes, TimBL, TBray, Ian, Norm, DOrchard 19:04:05 [Zakim] On IRC I see TBray, Ian, RRSAgent, DanC_g, Zakim, Norm, Chris, timbl 19:04:13 [Ian] Regrets: SW 19:04:59 [Zakim] +DanC 19:05:42 [Ian] Scribe: IJ 19:05:43 [Ian] Chair: NW 19:05:47 [Ian] Welcome Pat Hayes! 19:06:03 [Ian] Agenda: 19:06:21 [DanC_g] regrets 4 Aug 19:06:23 [Ian] Next meeting: 4 Aug? 19:06:36 [Ian] DO: Regrets 19:06:40 [Ian] q+ 19:07:00 [Ian] FTF meeting in October? 
19:07:13 [Ian] Proposal: Meet 6-7 Oct in Bristol 19:07:16 [DanC_g] there were 2 actions for ftf offers, no? 19:07:34 [Chris] yes, we should 19:08:03 [Ian] Question: Should we meet ftf in October? 19:08:07 [Norm] DanC_g: yes, but mine and stuarts got mangled together. his is the only one left on the table 19:08:14 [DanC_g] bummer 19:08:15 [Ian] TBL: It would be expensive to me to go to a ftf. 19:08:58 [Ian] TBL: What about another extended remote meeting? 19:09:06 [Ian] DO: I'd prefer to meet ftf, but can live with long virtual meeting. 19:09:14 [Ian] TBray: I think long remote meeting was painful but worked. 19:09:18 [Ian] CL: I'd prefer to meet ftf. 19:09:22 [Ian] PC: I prefer ftf. 19:09:28 [Ian] IJ: I can do either. 19:09:46 [Chris] we can get a lot done in two days of concentrated effort 19:10:28 [Ian] PC: I'm skeptical that we will be able to reach agreement on text regarding extensibility. 19:10:38 [Ian] PC: Like DO, I think we should get doc to last call before Nov 2003 AC meeting. 19:11:23 [Ian] DC: I can't comfortably make a ftf meeting. 19:11:37 [Ian] NW: I think we are more likely to get to last call this year if we have an Oct ftf meeting. 19:11:39 [TBray] If we decide to do it, I'll go 19:11:52 [DanC_g] Chris seems to be the source of an echo 19:13:01 [Ian] DO: Should a subset meet? 19:13:03 [DanC_g] Chris, pls mute yourself or something 19:13:18 [Zakim] -Chris 19:13:36 [Ian] TBray: North America would reduce the pain for me (even if unfair, which I recognize). 19:13:56 [Zakim] + +1.334.933.aaaa 19:14:13 [Chris] yes, uits unfair 19:15:10 [Ian] NW: As it stands now, if we assume that SW is willing to host, it appears we have 5-6 people willing to meet in Bristol. TBL/DC are tentative no's. RF is unknown. 19:15:15 [TBray] yes, seems better chris 19:15:25 [Ian] NW: It would seem that there is tentative consensus to meet in Bristol. 19:15:28 [Ian] TBL: What about a video link? 19:15:40 [Ian] TBL: E.g., between a UK site and an east coast site. 
19:15:52 [Ian] Ian has left #tagmem 19:16:02 [Ian] Ian has joined #tagmem 19:16:39 [Norm] proposal is a f2f with a block of time for telcon/vidcon 19:16:54 [Ian] CL: As long as there is good chairing two remote pieces, allow remote call-in for a few hours per day. 19:17:31 [TBray] +1 19:17:34 [Ian] NW Proposes: (1) ftf meeting in Bristol (2) count on video/telephone remote participation for important issues to those individuals who cannot attend. 19:17:47 [Chris] prefer a three day 19:17:50 [Ian] PC: Why meet on Monday/Tuesday? 19:18:07 [Ian] NW: I did not mean to constrain to 2 days. There are hard constraints to return before end of week. 19:18:13 [Ian] NW: We can meet 3 days. 19:18:16 [Ian] CL: 3 days ok for me. 19:18:38 [Ian] CL: I can still fly home in evening (direct Bristol -> Nice) so easy for me. 19:18:54 [Ian] TBL: Travel on Sunday not family-friendly. 19:19:04 [Ian] PC, DO: We were planning to give up Sunday. 19:19:14 [Chris] norm, I suggest cutting this discussion at now+5 minutes 19:19:16 [Ian] PC: Next weekend is even more important (Canada Thanksgiving, eh) 19:19:25 [Chris] and moving on to the technical material 19:19:33 [Chris] and stop boring our guest ;-) 19:19:43 [Ian] Proposed: Meet in Bristol M-W with remote participation. 19:20:10 [Ian] DC: Please accept my regrets (not sure I can be there in any form). 19:20:39 [Ian] Resolved: Meet in Bristol M-W (6-8 Oct 2003) with remote participation. 19:20:49 [Ian] TBL: Please maximize remote participation times. 19:21:06 [Ian] NW: Depends on SW's facilities somewhat. 19:21:16 [Ian] === 19:21:17 [Ian] q? 19:21:20 [Ian] q- 19:21:35 [Ian] For next week: Review ftf minutes 19:21:42 [Ian] 19:21:43 [Chris] we could start late (10) and finish late (7, at least) to make remote participation from westwards folsk easier 19:21:45 [Ian] --- 19:21:46 [DanC_g] I looked at the draft minutes; they're *not* OK as is. 
Sorry I haven't found time to send details 19:21:51 [TBray] q+ 19:22:26 [Ian] NW: For technical agenda, we can either talk about (1) httpRange-14 or (2) define resource and representation. 19:23:07 [Ian] DO: My proxy vote goes to TB. 19:23:13 [Norm] q? 19:23:14 [Chris] q? 19:23:15 [DanC_g] ack tbray 19:24:07 [timbl] q+ to mention that Roy had views on this and to proceede without him is a shame. 19:24:33 [Ian] TBray: Procedural point: If we write up text along the lines we agreed to at the ftf meeting (information resource mentioned), I think several of us can live with that compromise. 19:24:46 [Ian] TBray: We can address the issue(s) in detail in a subsequent version of the arch doc. 19:24:46 [DanC_g] ack danc 19:24:46 [Zakim] DanC_g, you wanted to note some input from Roy 19:24:47 [Norm] ack DanC_g 19:24:57 [Zakim] -DOrchard 19:25:15 [Chris] I agree that information resource really helps as a concept 19:25:16 [Ian] DC: On "information resources": Roy said there was no such thing. We did not *decide* to make the distinction at the meeting. 19:25:16 [Norm] q? 19:25:26 [Ian] DC: I think RF is on the record as saying we should not make the distinction. 19:25:41 [Norm] ack timbl 19:25:41 [Zakim] timbl, you wanted to mention that Roy had views on this and to proceede without him is a shame. 19:25:41 [DanC_g] ack timbl 19:25:45 [Ian] TBL: Right, I think RF opposed the notion of information resource. 19:26:29 [TBray] q+ 19:27:01 [Ian] TBL: RF's absence today is a shame. We invited Pat Hayes (PH) to discuss the use of terms. I think RF might agree that in practice there are information resources, but he would not like to make the distinction in the model. 19:27:45 [TBray] q= 19:27:48 [TBray] q= 19:27:51 [TBray] q= 19:27:54 [Ian] PH: Please explain RF's position. Is the position that there is no such thing as an information resource, or that the distinction is not useful? 
19:27:55 [TBray] q+ 19:27:58 [DanC_g] ack tbray 19:28:19 [Ian] TBray: I think I can convey RF's position. RF and I both observe that the existing deployed base of software has no opinion about what the nature of a resource is. 19:28:38 [Ian] TBray: Deployed software doesn't care whether the resource is a mountain or a picture of a mountain. 19:28:52 [Ian] TBray: The distinction has nothing to do with respresentational state transfer. 19:29:06 [Ian] TBray: While I agree with him technically, I am aware of the angst caused by the issue. 19:29:47 [Ian] DC: In particular, RF has pointing out that http URIs (without #fragid) exist in practice that refer to robots (not information resources). 19:29:52 [TBray] q+ 19:30:07 [Ian] TBray: Another example is XML namespace URIs that begin with http. 19:30:16 [DanC_g] ... and have no #s 19:30:46 [Ian] PH: Seems like XML Namespace URIs are a good example of URIs that (can) have nothing at the end. It's hard to get ahold of the namespace. You get documents back saying "I am a namespace." 19:31:06 [timbl] q+ to explain NS 19:31:27 [Ian] PC: Don't forget use of Namespace URIs as declared without making available any representations. 19:31:51 [timbl] q+ also to menation Roy's model of all the bits as being representatation of the robot. 19:32:16 [Ian] PH: Seems frequent to have URIs without representations available; no need to make this illegal. 19:32:47 [timbl] q+ to also mention Roy's model of all the bits as being representatation of the robot. 19:33:00 [Ian] PH: When you get persnickety about nature of resource, you continue to find ambiguities (e.g., resource at given moment in time v. resource at any moment time). 19:33:03 [DanC_g] ack tbray 19:33:04 [Norm] ack TBray 19:33:28 [Ian] TBray: Suppose we proceed in document by making distinction between information resource and "other types" of resources. 19:33:44 [Ian] TBray: TBL has said that ambiguous denotation with URIs is dangerous to sem web. 
19:34:06 [Ian] TBray: What would need to be said in arch doc to make building sem web sanely possible? 19:34:38 [Ian] PH: What bothers me is that there is the axiom on the current draft: The claim that a URI must *identify* a unique resource. 19:34:44 [Ian] TBray: What do you mean by "unique"? 19:35:00 [Ian] PH: If the axiom could be weakened or removed, a lot of these problems would just go away. 19:35:19 [Chris] q+ tbray to ask in what sense it is unique 19:35:37 [Ian] TBL: There's a philosophical debate issue (denotations and interpretations). 19:35:37 [TBray] q+ to ask where the assertion Pat talks about is made 19:35:56 [Ian] TBL: But there are practical problems when someone wants to use a URI to refer to a page and also to a person. 19:36:03 [Ian] TBL: These people haven't been playing with the semantic web. 19:36:07 [DanC_g] ack timbl 19:36:07 [Zakim] timbl, you wanted to explain NS and to also mention Roy's model of all the bits as being representatation of the robot. 19:37:27 [Ian] TBL: There's a philosophy question (how do we determine we mean the same thing when using URIs). But there's another thing (hair-splitting) about whether we mean a photo or a photo including its frame. I'm worried about neither of these (for the moment). I am concerned when people are expressly referring to two things with the same URI. 19:37:29 [Norm] q+ to ask how to distinguish between the nits and the real distinctions 19:37:40 [Ian] PH: For the purposes of today's discussion, I agree with TBL. 19:37:51 [Ian] (PH: But I don't actually agree with TBL) 19:38:10 [Ian] PH: I agree that the current technology doesn't care what the nature of the resources is. 19:38:14 [timbl] Pat: Current technology not on teh sem web doesn't give a rat what these resources really are. 19:38:25 [Norm] ack tbray 19:38:25 [Zakim] tbray, you wanted to ask in what sense it is unique and to ask where the assertion Pat talks about is made 19:38:29 [Ian] PH: The problem is what's said in the arch doc. 
The document says important about resources that matter. 19:38:31 [Norm] ack also 19:38:31 [Zakim] also, you wanted to menation Roy's model of all the bits as being representatation of the robot. 19:38:34 [Ian] TBray: What language is bothering you. 19:38:49 [Ian] 19:39:13 [Ian] 2. Identification and Resources 19:39:23 [Ian] glob 19:39:23 [Ian] al naming in Web Architecture." 19:39:58 [Ian] [TB reads second para as well] 19:40:10 [Norm] q? 19:40:16 [DanC_g] (is the relevant draft cited from the agenda?) 19:40:30 [Norm] no, my bad 19:41:02 [Ian] PH: I don't establish a link to a galaxy by using a URI. 19:41:07 [Ian] PH: Let's define "link" 19:41:08 [Ian] q? 19:41:14 [Chris] link ix a context of use of uris 19:41:26 [Norm] ack Norm 19:41:26 [Zakim] Norm, you wanted to ask how to distinguish between the nits and the real distinctions 19:41:27 [timbl] q+ to mention confusion betweb Rs and IRs 19:41:28 [Norm] norm agrees to pass 19:41:34 [Norm] ack pathayes 19:41:47 [Ian] PH: URI-makes-link if we think about resources as being networked resources. 19:41:50 [Chris] q+ to point out links are only found in resource representations 19:42:06 [Ian] TBray: I'm sorry, I just don't see the problem. 19:42:11 [TBray] q+ 19:42:45 [Ian] PH: How do you link from an imaginary entity to something 100s of 1000s of light years away. 19:42:51 [Norm] ack timbl 19:42:51 [Zakim] timbl, you wanted to mention confusion betweb Rs and IRs 19:42:51 [Ian] PH: You can link the representations, but not the things. 19:42:58 [Chris] PH just said what I was quesd up to say!! 19:43:02 [TBray] q- 19:43:12 [Norm] ack chris 19:43:12 [Zakim] Chris, you wanted to point out links are only found in resource representations 19:43:43 [Ian] DC: The doc says "When a REPRESENTATION of one resource..." 19:43:57 [Ian] q? 19:44:09 [Ian] CL: You can only have a link from a representation. 19:44:15 [Ian] CL: The link is to a resource, not a representation. 19:44:26 [Ian] (CL: Modulo fragid nonsense.) 
19:44:48 [Ian] CL: One knows about links by fetching representations and determining that there's a link. 19:45:15 [Ian] CL: A link IS formed between resources; the link is accomplished via representations. 19:45:39 [Chris] a link IS NOT* fomed merely by the existence of two resources 19:45:59 [Chris] a link has to be explicitly established, in a representation 19:46:06 [Chris] not all representations have links 19:46:10 [Ian] PH: "shared set of bindings". Can we assume that looking at this from a sem web that "parties" can be software agents? 19:46:11 [Ian] DC: Yes. 19:46:24 [Ian] PH: So how do software agents establish a shared vocabulary? 19:46:35 [Chris] eg I can have an image in SVG that has links, and a JPEG image that does not 19:46:40 [Ian] DC: The document doesn't say that they "have to", we just observe that they do. 19:46:47 [Chris] and those could be two representations of the same resource 19:46:53 [Ian] TBL: Software agents pick up knowledge by being written by humans. 19:46:59 [timbl] "The Astronomer" 19:47:09 [Ian] TBray: It's safe to assume (by software agents) that same URI refers to same thing. 19:47:22 [Ian] DC: Both names and what they refer to is bootstrapped. 19:47:33 [Ian] PH: There's no way to communicate "the thing". You can only refer to it with symbols. 19:47:40 [Ian] DC: That's exactly what we do. 19:48:00 [Ian] PH: It works between people in a room because they all see the dog and observe understanding. 19:48:12 [Ian] TBray: Why doesn't it work on the Web. 19:48:27 [TBray] q+ 19:48:29 [Ian] PH: Vocab is defined in terms of bindings, not shared URIs. I don't think that's true. 19:48:31 [timbl] q+ 19:48:53 [Ian] PH: Software can do a lot without knowing bindings. It doesn't matter in some cases whether there is even a binding. 19:49:06 [Ian] PH: Only agreement is the agreement to use URIs in the same way (in a given context). 
19:49:08 [Norm] ack tbray 19:49:29 [Ian] TBray: I agree with PH here - I think the discussion of "shared set of bindings" is gratuitous; we never actually define the bindings. 19:49:43 [Ian] TBray: We could delete that phrase; we don't need to talk about bindings at this point in the doc. 19:50:09 [Ian] DC: But names refer to something. There is tangible value when our views of binding are the same. 19:50:18 [Norm] ack timbl 19:50:19 [DanC_g] "a shared set of identifiers on whose meanings they agree" 19:50:21 [DanC_g] I like that 19:50:38 [TBray] +1 19:51:38 [Chris] I observe that in real life, subsets of communities can agree on the meaning of a given term, but entire communities rarely ever do. hence schools of thought, different factions, political parties, and so on. a canonical set of definitions only goes so far 19:51:43 [Ian] TBL to PH: On this phone call, we say "Pat's on the queue." Pat is animate, the queue is virtual. But there's no confusion about these things. We've exchanged a huge amount of information, and it would be inconceivable to be confused about what "Pat" means. A vast number of URIs will work that way on the semantic web. 19:51:55 [Ian] TBL: E.g., those published by the OWL WG. 19:52:41 [Ian] [TBL on cost in time of continuing to debate fine points.] 19:52:55 [Norm] ack DanC_g 19:52:55 [Zakim] DanC_g, you wanted to reiterate pat's proposal 19:52:59 [timbl] q+ 19:52:59 [Ian] [PH proposal: "establish a shared set of identifiers on whose meanings they agree."] 19:53:25 [Ian] PC: Do we define "identifiers"? 19:53:33 [Ian] TBray: I don't think we need to define "identifiers" 19:53:38 [Ian] DC: It's clear that we mean URIs. 19:54:29 [timbl] 19:54:35 [Ian] Proposed: s/a shared set of bindings between identifiers and things/a shared set of identifiers on whose meanings they agree 19:54:58 [Ian] NW: If we adopt this, does this help clarify what we mean by resources/respresentations? 
19:55:29 [Ian] Resolved: In section 2, s/a shared set of bindings between identifiers and things/a shared set of identifiers on whose meanings they agree 19:55:32 [Ian] --- 19:56:07 [TBray] oops, Zakim is confused 19:56:08 [TBray] q- 19:56:14 [TBray] q+ Pat 19:56:32 [Norm] ack timbl 19:57:31 [Ian] [Second issue is on information resources.] 19:57:37 [Ian] ack Pat 19:57:43 [Ian] q+ Pat 19:57:53 [Ian] ack Pat 19:58:06 [Ian] PH: "The networked information system is built of linked resources, and the large-scale effect is a shared information space. The value of the Web grows exponentially as a function of the number of linked resources (the "network effect")." 19:58:09 [Ian] PH: Whoa. 19:58:23 [Ian] PH: This seems to be talking about information resources. 19:58:24 [Ian] DC: I agree. 19:58:27 [Ian] TBray: I don't,. 19:58:53 [Norm] q+ 19:59:02 [Ian] TBL: In my terminology, you have a picture/form of a robot; those are information-bearing objects. 19:59:09 [Ian] TBray: Software can't tell the difference. 19:59:13 [Ian] TBL: My software can. 20:00:03 [Ian] [On meaning of "link"] 20:00:36 [timbl] q+ to discuss what < > is. 20:00:40 [Norm] akc norm 20:00:42 [Norm] ack norm 20:02:14 [Norm] Stupid is not illegal. 20:02:26 [Norm] q? 20:02:37 [TBray] q+ to say that the information system includes only those things for which people publish URIs, and they're only good citizens if they make representations available 20:02:40 [DanC_g] but harmful can be, and perhaps should be, promoted to counter-to-web-architecture 20:02:48 [timbl] q+ paul 20:02:52 [Norm] ack timbl 20:02:52 [Zakim] timbl, you wanted to discuss what < > is. 
20:03:25 [Norm] q+ paul 20:04:14 [Ian] Ian has joined #tagmem 20:04:20 [Ian_] Ian_ has joined #tagmem 20:04:55 [Ian_] zakim, drop Ian 20:04:55 [Zakim] Ian is being disconnected 20:04:56 [Zakim] -Ian 20:04:57 [Ian_] zakim, call Ian-BOS 20:04:57 [Zakim] ok, Ian_; the call is being made 20:04:58 [Zakim] +Ian 20:05:25 [Ian_] TBL: Web works because we have expectations about the same content. 20:05:37 [Ian_] NW: What if you do a GET on a URI and get back an RDF representation that says "That URI refers to a person." 20:05:47 [DanC_g] I think this "expectation of same content" issue is much more subtle... doesn't work for W3C home page, for example, which has different information on different days. 20:05:55 [Ian_] TBL: The system would stop. "I'm sorry, I told you that the URI refers to a person; not a painting." 20:06:12 [Ian_] NW: If I own a URI, I don't know why I don't get to say definitively what it refers to. 20:06:56 [Ian_] TBL: It's useful to be able to limit scope to information resources rather than have to call up a person to ask what a URI refers to. 20:06:56 [DanC_g] we can, and we do go thru doing just that. 20:07:21 [Ian_] TBL :The Web is built of networked information objects. The identity of those things is defined by what is invariant when you do GET with that URI. 20:07:41 [timbl] and what is invariant is that that is a picture of an oil painting. 20:07:45 [Ian_] PH: If you say that what the URI denotes is fixed by the owner, then any URI can denote anything. 20:07:47 [Ian_] DC, CL: Yes. 20:08:00 [Chris] yes, and that would be useless, but is still possible 20:08:01 [Ian_] PH: I don't think that's feasible as a network architecture. 20:08:03 [DanC_g] (I wasn't among those who sayd "yes" there) 20:08:08 [DanC_g] ack danc 20:08:08 [Zakim] DanC_g, you wanted to state my position on httpRange-14: I don't think we shoulld resolve it in this version. 
I think we should make and use the distinction between resources in 20:08:08 [Norm] ack DanC_g 20:08:11 [Zakim] ... general and information resources; i.e. those resources that can have representations. 20:08:24 [Ian_] DC: On this issue, I don't think that we should resolve it entirely in v1 of arch doc. 20:08:35 [Ian_] DC: There's a lot of work to be done before we do. 20:08:43 [timbl] PH: If it were really true that yo had to ask someone what their URI meant, the web would not work. It isnt a working network architecture 20:08:55 [Ian_] DC: But I do think it's useful to make the distinction between information resources and other resources. It will help the community talk about the problem. 20:09:29 [timbl] seconded 20:09:36 [Norm] q? 20:09:38 [Norm] ack tbray 20:09:38 [Zakim] TBray, you wanted to say that the information system includes only those things for which people publish URIs, and they're only good citizens if they make representations available 20:09:56 [Norm] q+ to ask who's going to write the new text 20:10:39 [TBray] sorry 20:12:33 [Ian_] Ian_ has joined #tagmem 20:12:37 [Ian] Ian has joined #tagmem 20:13:37 [Ian] PH: I don't know what it means to build a networked ifnormation system with galaxies. 20:13:37 [Ian] DC: Is it useful to make a decision about adding "information resource" without RF here? 20:13:37 [Ian] NW: I'd like to move forward even if RF's not here. He can object. 20:14:06 [Ian] Action DC: Propose text for architecture document that distinguishes "information resource" from other types of "resources". 20:14:10 [Norm] ack norm 20:14:10 [Zakim] Norm, you wanted to ask who's going to write the new text 20:14:38 [Ian] DC: I'd like to resolve to include such language. 20:15:01 [Ian] PC: No chance. 20:15:12 [Ian] [Others may have said no as well] 20:15:25 [Norm] ack paul 20:15:34 [Ian] DC: I don't accept the action if we are not deciding. 20:15:41 [Norm] q? 
20:15:41 [Ian] Action TB: Propose text for architecture document that distinguishes "information resource" from other types of "resources". 20:16:14 [Ian] --- 20:16:22 [Ian] httpRange-14 20:16:31 [Ian] NW: I fear that we simply disagree. What's the best way to frame the discussion that will be constructive? 20:16:35 [Ian] DC: I move to adjourn. 20:17:09 [Ian] NW: Several people wrote back and said that my summary was flatly wrong. 20:17:17 [timbl] Uniform Resource Identifiers (URI), defined by "Uniform Resource Identifiers (URI): Generic Syntax" [URI], are central to Web Architecture. Identifier here is used here in the sense of name. Parties who wish to communicate about something will establish a shared vocabulary, i.e. a shared set of bindings between identifiers and things. This shared vocabulary has a tangible value: it reduces the cost of communication. The ability to use common identifiers across 20:17:18 [Ian] PH floats an idea. 20:17:18 [timbl] URIs identify resources. A resource can be be anything. Certain resources are information resources, which convey information. These are termed information resources. Much of this document discusses information resources, often using the term resource. 20:17:20 [timbl] An information resource is on the Web when it can be accessed in practice. 20:17:22 [timbl] When a representation of one information resource refers to another information"). 20:17:41 [Ian] PH: Perhaps question producing more contention than it needs to. 20:18:16 [Ian] PH: If an http URI is used with "#fragid", then, it should be that the URI before the "#" SHOULD denote an information resource. 20:18:18 [Chris] oh, magic hashes again 20:18:47 [Ian] PH: This gives an opening to folks like Patrick Stickler. But the point is not to have one's cake and eat it too. 20:19:21 [DanC_g] ack danc 20:19:21 [Zakim] DanC_g, you wanted to doubt that hayes's suggestion helps... 
I'm pretty sure roy thinks robot#topPart works while robot refers to a non-document 20:19:39 [Ian] DC: I don't think this suggestion helps. RF might say that <URI#top> refers to top part of robot and <URI> refers to entire robot. 20:20:04 [Ian] [TB notes that MIME type doesn't say anything about resource, just type of representation] 20:20:18 [Chris] its not clear that resources even have a type 20:20:34 [DanC_g] not in general, no, chris. 20:20:44 [Ian] q? 20:20:48 [TBray] It's pretty clear that they *don't* have a "type" absent external assertions e.g. in RDF 20:21:47 [timbl] paulcottpon/name/foobar 20:21:50 [Ian] PC: Imagine I had an XML namespace (paulcotton.name/foobar) describing a bunch of things, and I don't make available a namespace document. And I want to refer to subpieces of the namespace. I can do something like #part1, #part2 to refer to pieces. That seems to make something illegal that folks are already doing today. 20:22:00 [TBray] paulcotton.name/foobar#2 20:22:08 [Ian] PC: I use /foobar#part1, /foobar#part2.... 20:22:34 [Ian] PC: I hear PH saying that if I use "#fragid" then there had better be a document available even when the fragid is stripped. 20:22:38 [Ian] PH: Yes, I was saying that. 20:22:40 [Chris] is 'a document' the same as 'an information resource representation' 20:23:07 [TBray] don't think so, Chris 20:23:20 [TBray] I think a document is a representation 20:23:24 [Norm] q? 20:23:47 [Ian] TBL: The question is whether I can use a URI to refer to a painting, or what magic I have to do to figure out whether the URI refers to a painting or an information object that refers to it. 20:24:44 [Ian] TBL: I'd like to be able to refer to an invoice for a robot, and be sure that someone else doesn't use the URI to get the sound of the robot hitting the floor. 20:24:54 [Ian] NW: That might happen; there's nothing that can be done about it. 20:24:56 [TBray] NW: shit happens 20:25:20 [Ian] TBL: But that case is broken. People shouldn't do that. 
It's damaging. 20:25:27 [Ian] TBray: We have language to that effect (on ambiguity). 20:25:44 [Ian] TBL: I want language that says that if you use a URI that refers to a picture and to a person, that that's wrong. 20:26:03 [Ian] NW: I agree that that's wrong. But I can't swallow assertions related to URIs "with #". 20:26:16 [Chris] 'wrong' and 'inconsistent' are human value judgements and as such, it will be possible to argue for and against them 20:26:40 [Ian] q+ 20:27:33 [Ian] TBL: Which assertion is wrong? 20:27:39 [Ian] NW: Don't say that the URI refers to a document. 20:27:49 [Ian] DC: TBL's argument is rationale, but it's not compelling. 20:28:02 [Ian] TBL: So the argument that the information content will always be there is not compelling? 20:28:23 [Ian] [TBL and CL disagree whether consistency is a human value judgment.] 20:28:24 [Norm] q? 20:29:02 [Ian] DC: The CYC ontology is coherent, but saying it's web arch at this point seems premature to me. 20:29:15 [Ian] DC: Not every web master has agreed to CYC documentation and agreed to it. 20:30:00 [TBray] The genie's out of the bottle already, just like qnames in content 20:30:08 [Ian] TBL: The cost of not agreeing to this point is very high. The language (which one?) will have to be reverse engineered in a year. 20:30:27 [Ian] NW: I don't think TBL has made the argument in a compelling fashion yet. 20:30:32 [Norm] what tbray said 20:31:05 [Ian] IJ: Any summary on this part of the discussion? 20:31:14 [DanC_g] (I'm OK with gaps in the IRC log; in fact, if people have higher expectations than that, they should think again.) 20:31:14 [Ian] TBray: no. 20:31:18 [Ian] :) 20:31:54 [DanC_g] hey... we got a decision about changing "bindings" to meaning! That's non-trivial! 20:32:25 [TBray] This isn't supposed to be easy 20:32:49 [Ian] NW: Thanks to all, especially PH. 
20:33:00 [Zakim] -TBray 20:33:00 [Zakim] -DanC 20:33:01 [Zakim] -Norm 20:33:03 [Zakim] -Paul 20:33:03 [Ian] PH: I won't make my "crazy suggestion" anymore; it's been shot down. :) 20:33:04 [Zakim] - +1.334.933.aaaa 20:33:04 [Ian] ADJOURNED 20:33:04 [Zakim] -PatHayes 20:33:06 [Zakim] -TimBL 20:33:07 [Ian] RRSAgent, stop
http://www.w3.org/2003/07/28-tagmem-irc.html
Technical Support On-Line Manuals

RL-ARM User's Guide (MDK v4)

HID_SetReport

#include <hiduser.h>

bool HID_SetReport (void);

The HID_SetReport function obtains report data from the host by copying it from the endpoint 0 buffer (EP0Buf). The function calls SetOutReport to update other application variables.

Modify the HID_SetReport function to obtain as many bytes as your application needs from the host. The HID_SetReport function supports the request HID_REPORT_OUTPUT only.

The HID_SetReport function is part of the USB Function Driver layer of the RL-USB Software Stack.

Note: the bool return value of HID_SetReport indicates whether the request was handled.

See also: HID_GetReport

Example:

#include <hiduser.h>

bool HID_SetReport (void) {
  switch (SetupPacket.wValue.WB.H) {
    case HID_REPORT_OUTPUT:
      OutReport = EP0Buf[0];  // copy the endpoint buffer into the report variable
      SetOutReport ();        // your function to update other application variables
      break;
    …
  }
}
https://www.keil.com/support/man/docs/rlarm/rlarm_hid_setreport.htm
Getting started with using OpenGL / freeglut in a Microsoft Visual Studio environment, for 32-bit builds.

1. Obtain freeglut

Go to the freeglut site to obtain the MSVC package. At the time of writing this demonstration we obtain version 3.0.0 in zip file format.

2. Obtain glew

At the time of writing this demonstration we obtain version 1.13.10 in zip file format.

3. Place freeglut and glew in the location of your choice.

4. Create a new empty project in Visual Studio

Add your main.cpp source code file. You don't have to put anything in it for the time being.

5. Set the Additional Include Directories

Select Project Properties > C/C++ > General > Additional Include Directories. Do this for both the freeglut and glew packages.

6. Set the Additional Library Directories

Select Project Properties > Linker > General > Additional Library Directories. Again, do this for both the freeglut and glew packages.

7. Place the freeglut DLL in your project location

freeglut.dll is found in the \bin folder. Take a copy of the DLL and paste it into your project location (where main.cpp is located).

8. Try with a simple example

#include <GL\glew.h>
#include <GL\freeglut.h>

void Display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_POLYGON);
    glVertex2f(0.0, 0.0);
    glVertex2f(0.5, 0.0);
    glVertex2f(0.5, 0.5);
    glVertex2f(0.0, 0.5);
    glEnd();
    glFlush();
}

int main(int argc, char* argv[])
{
    glutInit(&argc, argv);
    glutCreateWindow("OpenGL Demo");  /* window title is illustrative */
    glutDisplayFunc(Display);
    glutMainLoop();
    return 0;
}

Build and run the project to see the result: a single quad rendered in the window.

Try it out and see. Feel free to contact me if you have any problems and/or want to give feedback.

For setting up OpenGL in Linux environments, see this related post.

Reader comment: I need some OpenGL 4.x code which is executable in Visual Studio 2015 (Visual C++). I am using Windows 10. I need code for 3D cloth modeling and game design. Please help me as soon as possible. Thanks in advance.
http://www.technical-recipes.com/2016/using-opengl-in-microsoft-visual-studio/
Releases allocated memory

#include <stdlib.h>
void free( void *ptr );

After you have finished using a memory block that you allocated by calling malloc( ), calloc( ) or realloc( ), the free( ) function releases it to the system for recycling. The pointer argument must be the exact address furnished by the allocating function, otherwise the behavior is undefined. If the argument is a null pointer, free( ) does nothing. In any case, free( ) has no return value.

    char *ptr;
    /* Obtain a block of 4096 bytes ... */
    ptr = calloc( 4096, sizeof(char) );
    if ( ptr == NULL )
        fprintf( stderr, "Insufficient memory.\n" ), abort( );
    else
    {
        /* ... use the memory block ... */
        strncpy( ptr, "Imagine this is a long string.\n", 4095 );
        fputs( ptr, stdout );   /* fputs() takes the string first, then the stream */
        /* ... and release it. */
        free( ptr );
    }

See also: malloc( ), calloc( ), realloc( )
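The allocate-check-use-free pattern above can be wrapped in a small helper. The function name dup_string below is illustrative, not part of the reference; this is a minimal sketch assuming a hosted C99 environment.

```c
#include <stdlib.h>
#include <string.h>

/* Duplicate a string into a freshly malloc'd buffer.
   The caller owns the result and must release it with free(). */
char *dup_string(const char *s)
{
    char *copy = malloc(strlen(s) + 1);  /* +1 for the terminating '\0' */
    if (copy != NULL)
        strcpy(copy, s);
    return copy;
}
```

Because free( ) accepts a null pointer and does nothing, a caller can unconditionally free( ) the result even when the allocation failed.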
http://books.gigatux.nl/mirror/cinanutshell/0596006977/cinanut-CHP-17-92.html
10 November 2008 08:51 [Source: ICIS news]

SINGAPORE (ICIS news)--Crude prices rose more than $3/bbl on Monday, boosted by cuts in Saudi Arabia's term supplies to Asia, and news that China had launched a stimulus package worth about $600bn (€468bn) to alleviate the effects of the global economic slowdown.

At 16:11 hours Singapore time (0811 GMT) on Monday, December NYMEX light sweet crude futures were trading at $63.56/bbl, up $2.52/bbl on last Friday's settlement level, after rising as much as $3.26/bbl to $64.30/bbl earlier. At the same time, December Brent also posted strong gains.

Rises in crude mirrored sharp gains in Asian stock markets, with Tokyo's Nikkei among the gainers. The announcement came amid the G20 economic summit. Prices were also buoyed by news of the Chinese stimulus package.
http://www.icis.com/Articles/2008/11/10/9170142/brent-rises-3bbl-on-saudi-cuts-china-stimulus.html
How do I make a callback call in C#?

Answer 1:

It's simple.

C++:

    void do_action(int& x, void (*f)(int&)) { f(x); }
    void twice(int& x) { x = 2 * x; }

    int i = 5;
    do_action(i, twice);

C#:

    public delegate void F(ref int x);   // roughly the equivalent of a typedef

    void DoAction(ref int x, F f) { f(ref x); }
    void MakeDouble(ref int x) { x = 2 * x; }

    int i = 5;
    DoAction(ref i, MakeDouble);

Note that MakeDouble may be, unlike in C++, a non-static method (!).

For normal cases, when your callback does not take out/ref arguments, there are ready-made delegates, and you should almost always use them:

    // Func<int, int> — a function taking an int and returning an int
    int Apply(int x, Func<int, int> f) { return f(x); }
    int GetDouble(int x) { return 2 * x; }

    int i = 5;
    i = Apply(i, GetDouble);

Or, with lambdas:

    int i = 5;
    i = Apply(i, x => 2 * x);

Answer 2:

An event? I may be wrong, but I did something like this:

    public class ArtDmxEventArgs : EventArgs
    {
        public IPAddress Sender;
        public uint Universe;
        public byte[] Data;
    }

    public delegate void ArtDmxHandler(object sender, ArtDmxEventArgs e);

    class ArtNetNode
    {
        public event ArtDmxHandler OnArtDmx;
        // ...
        private void FireOnArtDmx(IPAddress sender, uint universe, byte[] data)
        {
            if (OnArtDmx == null)
                return;
            ArtDmxEventArgs e = new ArtDmxEventArgs();
            e.Sender = sender;
            e.Data = data;
            e.Universe = universe;
            OnArtDmx(this, e);
        }
        // ...
    }

Client code:

    {
        // add a handler
        node.OnArtDmx += new ArtDmxHandler(OnArtDmx);
    }

    private void OnArtDmx(object sender, ArtDmxEventArgs e)
    {
        // handler, called when a packet arrives
    }

All callbacks, as far as I have seen, are built on this mechanism.
https://computicket.co.za/c-callback-call/
- Please help in drawing Flowchart (December 10, 2010, 11:42 PM)
- NSUserDefaults Example (December 10, 2010, 10:56 PM): Hi, Give me code of NSUserDefaults to store and retrieve the some data. Thanks
- MPVolumeView Example code (December 10, 2010, 10:52 PM): Hi, Please let's know how I can add MPVolumeView control in my iPhone application? Thanks
- Interface (December 10, 2010, 7:59 PM): In JDBC, where are the source code of methods which are the member of recordset and other interfaces of JDBC packege.
- JSP Project (December 10, 2010, 7:56 PM): I want to get a code for login page with user interface in JSP for my web application. Please help me.
- java (December 10, 2010, 7:32 PM): where i write java programme and how do i execute the programme
- how formBackingObject() method will work. (December 10, 2010, 7:26 PM): How the formBackingObject method will behave?
- Inheritance, abstract classes (December 10, 2010, 6:43 PM): Hi.
- checkbox custom tag creation in jsf (December 10, 2010, 5:32 PM): how we...
- JSP,DB,SERVLET (December 10, 2010, 5:12 PM): hi thank you for your reply. With this code i can insert the data successfully into database but In ajax.jsp once i give the name;;;age and city are not automatically fetched......instead sumthing get filled in dose textfields....can u help me?????? Name: Ram I entered dtls of ram into da...
- JSP,DB,SERVLET (December 10, 2010, 4:38 PM): I have a jsp page called page1.jsp with 3 text fields name, phone, city. i populated these datas into a database table through servlet (page1servlet.java) and bean (page1bean.java). I have another jsp page (display.jsp) with 4 fields name, city, phone. Once the user enter the name, city and phone shoul...
- sql express connection (December 10, 2010, 4:27 PM): Sir, I am using ms-SQL server 2005 as database. i have .mdf file and .ldf database files with me. How to connect to these files in my web application. I am using type-4 drivers... Please let me know.... thanks, Regards, VASU.
- Read data from Excel and insert in to DB and export data from DB to Excel (December 10, 2010, 4:23 PM): Hi, I need to read the data from excel and I need to insert the same in to DB (vice versa). I have an Excel sheet contains 10 Telephone numbers, I have customer database in which I have stored all customer i...
- JSP,DB,SERVLET (December 10, 2010, 4:22 PM): hi thank you for your reply. With this code i can insert the data successfully into database but once i give submit button in insert.jsp it should be forwarded to ajax.jsp. In ajax.jsp once i give the name;;;age and city are not automatically fetched......can u help me?????? And can u send ...
- java code (December 10, 2010, 2:01 PM): hi any one please tell me the java code to access any link i mean which method of which class is used to open any link in java program.
- java (December 10, 2010, 1:35 PM): hi im new to java plz suggest me how to master java....saifjunaid@gmail.com
- JFreeChart - Display coordinate value (December 10, 2010, 12:06 PM): How to Mark Coordinte value on top of the bar?
- CLLocation latitude longitude (December 10, 2010, 11:41 AM): Hi, I am developing an location based application. I want to get the current location in the Latitude and Longitude format. Please let's know how to get latitude longitude from CLLocation object? Give me the code example of converting CLLocation object into Latitude and Longitude....
- JSP,DB,SERVLET (December 10, 2010, 11:31 AM): I have a jsp page with 3 text fields name, age, city. i populated these datas into a database table. I have another jsp page with 4 fields name, city, age and phone. Once the user enter the name, city and age should be automatically populated from database throush servlet.. Can u give me the code!!!!...
- parse a file into primitive binary format (December 10, 2010, 6:12 AM): Hi, I need help converting an audio file or any other file to its primitive binary format, and when I read the inputstream I find it some integers and I don't know its binary representation. please I need any help or tip. regards thanks
- Netbeans Question need help desperately!! (December 10, 2010, 5:02 AM): Ok here is my code-

      public class RollDie2 {
          public static void main(String[] args) {
              Random randomNumber = new Random();
              int[][] frequency = new int[9][9];
              int die1 = 0;
              int die2 = 0;
              for (int roll = 1; roll <= 64000; roll++) {
                  die1 = randomNumber.nextIn...

- reverse words in a string (words separated by one or more spaces) (December 10, 2010, 2:19 AM)
- static keyword with real example (December 9, 2010, 10:36 PM)
- cookies (December 9, 2010, 10:24 PM): hi, i m vishal want a simple example of cookies.
- Matrix multiplication (December 9, 2010, 9:45 PM): program to read the elements of the given two matrices of order n*n and to perform the matrix multiplication.
- Standard deviation (December 9, 2010, 9:40 PM): program to calculate the standard deviation of an array of values. the array elements are read from terminal. use function to calculate standard deviation and mean.
- problem in validation (December 9, 2010, 9:18 PM): sir/madam, i m using struts-1.3.10. i m getting a problem my properties file is not found.... while i hav configuired it in struts-config.xml file. thanks n regards himanshu
- java (December 9, 2010, 8:58 PM): code for creating smileys
- Import My Own Package (Automatically) (December 9, 2010, 7:16 PM): How can I import my own package in every java program (automatically)....? For example: java.lang.String... automatically imported, we need not to import it....
- JBOSS Startup Problem - StandardWrapper.Throwable java.lang.NullPointerException at java.util.Hashtable.put(Hashtable.java:394) (December 9, 2010, 6:19 PM): Hi Gurus!!!! :D.. I loc...
- array ollection with in script (December 9, 2010, 5:37 PM): my coding is
- UIImage width and height (December 9, 2010, 5:23 PM): Hi, How I can find the width and height of UIImage object. Let's know an example code for finding UIImage width and height. Thanks
- iphone handle orientation change (December 9, 2010, 4:01 PM): How can i handle the orientation change in iPhone / iPad application? Please suggest. Thank in Advance!
- script for data (December 9, 2010, 3:31 PM): how to write a simple script to display a selected data from list in flex applicatin?
- multiple file upload at a time like gmail (December 9, 2010, 3:04 PM): I want to upload multiple files at a time like gmail. is there any possiblity/solution to do that?
- home page (December 9, 2010, 2:04 PM): Develop home page using applets and swings
- Java question_Naresh (December 9, 2010, 1:23 PM): e...
- hibernate and struts integration in ECLIPSE (December 9, 2010, 12:55 PM): hi, I saw the complete tutorial of hibernate in your website... But i found nothing for hibernate and struts integration in ECLIPSE and not using ANT Can any One pls Help.. Thanks in Advance
- how to give validation using javascript in swing (December 9, 2010, 12:54 PM): how to give validation using javascript in swing....... can somebody give code for username and password validation using javscript and swing
- how to move curosr from one text field to another automatically when the first textfield reaches its maximum length (December 9, 2010, 12:51 PM)
- Need Jar in the JPA examples (December 9, 2010, 12:49 PM): JPA - tutorial is good, very easy to understand and simple. But to run the examples, the jar/helping files are needed. and how Hibernates are used in the JPA example? Thnaks Abhijit Das
- static (December 9, 2010, 12:17 PM): when staic memory deleted
- ipad screen resolution pixels (December 9, 2010, 12:14 PM): Hi! Can you tell the exact size of the iPad in pixels.. including height & width. Thanks Very Much!
- Integration (December 9, 2010, 12:11 PM): How to integrate struts with spring in MyEeclipse in layer independent manner. Please send the screen shots. How to integrate struts with EJB in MyEeclipse in layer independent manner. Please send the screen shots. How to integrate spring with Hibernate in MyEeclipse in layer i...
- How to read a log file from server location and send a mail when exception occurs. (December 9, 2010, 12:03 PM): HI, How can i read the 2GB to 4GB log file from server. and i need to send a mail when exception occurs. Thanks Rajeev
- object of object class (December 9, 2010, 11:33 AM): what do u mean by "object of object class"....?
- how can you calculate you your age in daies?? (December 9, 2010, 11:15 AM): hi, I am beginner in java! can any one help me to write programm to calculate age in daies???
- radar graph in java (December 9, 2010, 10:58 AM): Hi All I am developing a java application. In this application i need to show a radar grap. when some body assigne rating then each rating should show on x y z axis. please suggest me some good way to do this task. Thanks Yogi
- java question (December 9, 2010, 10:42 AM): wats dynamic dispatching??
- Iphone Developmnt (December 9, 2010, 9:38 AM): i am making an application in ios, for ipad or iphone, what i want to know is that can i provide an option of changing the theme, or can i separate my code from the theme... so that when giving my product to next customer, i can change the look and feel please guide me through Re...
- Java Leap Year Problem (December 9, 2010, 3:26 AM): I need help with a writing a program in Java that creates a class Year that contains a data field that holds the number of months in a year and the number of days in a year. It should include a get method that displays the number of days and a constructor that sets the number of months to 12 and ...
- Exception Handling in java (December 9, 2010, 12:36 AM): what is advantage to catch smaller exception first and then normal exception. I mean we normally catch SQLException first and then Exception.
- how to get data using dropdownlist, textbox and search button in jsp (December 9, 2010, 12:08 AM): Hi, ..........
- JAVA Game (December 8, 2010, 11:19 PM): I want to make a JAVA game...in dat game...d player has to input a word in 10 secnds...4 dat m running a delay loop... but the problem is not...i am not getting any way...to make that input field (eg: String str=in.readLine();) valid only till 10 secs...or if i can calculate in any way the time ta...
- servlet (December 8, 2010, 11:11 PM): i want to use servlet application on my web page then what should i do. I have already webspace.
- conver byte array to ByteArrayOutputStream (December 8, 2010, 10:15 PM): Sir Can you please tell me how to convert the byte[] to ByteArrayOutputStream; One of my method will return the byte[], i have to convert this byte array to ByteArrayOutputStream Thanks Rahul
- Dear Sir I need to join u (December 8, 2010, 10:04 PM): Dear Sir i am santosh Rai. San...
- (untitled): hi i want to develop a code for when user clicks on forgot password then the next page should be enter his mobile no then the password must be sent to his mobile no...! Thanks in advance Nag Raj
- triangle output (December 8, 2010, 7:24 PM): program to get the following output: * *
- triangle output (December 8, 2010, 7:23 PM): program to get the following output: * *
- How to go about struts-2 vaidation framework/annotations with simple ui theme? (December 8, 2010, 7:22 PM): I...
- number sorting (December 8, 2010, 7:18 PM): program to sort a list of numbers in descending order where the inputs are entered from keyboard in runtime
- help (December 8, 2010, 6:33 PM): i can't answer this question.... state 2 different array processing which use loops.
- Hibernate Training (December 8, 2010, 6:22 PM): What is the time duration for this training? Am I going to get some certificate after this training? How much is the fees for this? What will be the method - desktop sharing or con call or you will just provide materials?
- JavaScript Defer Attribute (December 8, 2010, 6:16 PM): defer attribute in javascript
- Spring Restful webservice and client example (December 8, 2010, 5:54 PM): Hi, I need the Spring Restful webservices generation using dao, dto and controller format. Then tell me how can i generate the client for the above service. Please send the code immediately. Its urgently need..... Thanks, Valarmathi P
- How do i create the node for target SMO in java..??? (December 8, 2010, 5:41 PM): How do i create the node for target SMO in java..??? or else whats the method for accessing the target SMO??
- have any one tried ajaxanywhere with jsp/servlet please provide sample (December 8, 2010, 5:39 PM): hello you can find this app here: it explains how to use this with struts but not with simple jsp servlet (or i am confused) also in netbeans when i added this jar as said, it shows me the method name AA...
- How to convert String double quotos to String double double quotes (December 8, 2010, 5:27 PM): Hi How to convert String double quotos to String double double quotes ("----->"") By replace? Problem is: String column is "Age is 60" now. whenver displaying the column value in csv, showing th "Age is 60 now". CSV: probleM: "Age is 60 now". Required: "Age is 60" now.
- java (December 8, 2010, 5:14 PM): what 2 access modifiers that can encapsulate the attributes of class?? and what is the concept of printf?
- Web Service - Which files we have to give to client and how they will use that (December 8, 2010, 4:55 PM): Hello,) crea...
- printf and println (December 8, 2010, 4:17 PM): what is the differences between printf and println?
- Looking for help (December 8, 2010, 3:12 PM): I am looking for a valid helper to solve flex 4 problems I have found after switching from flex 3. We would work directly on my code with team viewer in the beginning. Thanks for your help Paolo
- Referencing components in flex 4 (December 8, 2010, 3:00 PM): I used to create flex 3 applications using canvas as main components. In this moment I am creating flex 4 applications but components (now based on s:Group) are not referenced inside actionscript the same way. Many times I used the following syntax to capture the web service resul...
- java (December 8, 2010, 2:40 PM): can we must define methods in abstract class?
- java (December 8, 2010, 2:35 PM): can we implement two main class in java?
- java (December 8, 2010, 1:04 PM): what type of questions will ask in java interview
- java (December 8, 2010, 1:01 PM): how to learn d java in easy way
- Tiles Plugin (December 8, 2010, 12:52 PM): I have used tiles plugin in my projects but now I am seeing that each definition is loaded twice tell me the reason of this. This making code written in tiles definition to execute two times and my project may has become slow.
- difference b/w == and equals() (December 8, 2010, 12:51 PM): what is the difference between == operator and equals()
- Jsf biggener (December 8, 2010, 12:43 PM): hi friends, am new to jsf, i got an exception while running my application. am using netbeans ide to run my application, below is the exception detail: type Exception report message descriptionThe server encountered an internal error () that prevented it from fulfillin...
- About Java2 (December 8, 2010, 12:35 PM): sir i want to a develop one text editor but the problem is that i m not able to save our file as text formet or any formet Sir plz guide me
- about java1 (December 8, 2010, 12:32 PM): Sir, i want to know how we develop 3d button, lable, textfield etc. in java. sir plz give one program as well Thank you
- diff detween jdbc and hibernet (December 8, 2010, 12:26 PM): plz give me the reply
- i want to find the byte code of a image file ... for my project..plz if anybody help me for java coding i will grateful.. (December 8, 2010, 11:16 AM): i want to convert a image file to its byte code format that help me for the pattern matching in my project.. but i cant convert Image file to its byte code format.. if anybody can plz help me
- To change font in java code (December 8, 2010, 10:20 AM): I am sending system generated mail through MIME message and Transport. Now i need to change the font.. Can you please help me as how to change the font of string body in java code
- wt is the advantage of myeclipse ide compare to others (December 8, 2010, 9:49 AM): plz give me the reply
- why to use hibernet as a dataacces layer (December 8, 2010, 9:47 AM): plz give me the reply
- wt are the components in the hibernet (December 8, 2010, 9:44 AM): plz give me the reply
- why to use hibernet vs jdbc (December 8, 2010, 9:39 AM): plz send me the reply
- reverse (December 8, 2010, 7:22 AM): program to read a string and print the reverse string
- Netbeans program (December 8, 2010, 5:29 AM): I need to write a program that does the following... The TicTacToe class contains a 3x3 two-dimensional array of integers. The constructor should initialize the empty board to all zeros. It should have a method playerMove that accepts two integers as parameters, the first is the player (1...
- Netbeans Question. (December 8, 2010, 5:05 AM): Ok here is my code-

      /*
       * To change this template, choose Tools | Templates
       * and open the template in the editor.
       */
      import java.util.Random;

      public class ContinueRollDie {
          /**
           * @param args the command line arguments
           */
          public static void main(String[] args) {...

- java (December 7, 2010, 8:50 PM): i want program to create package for basic mathematical operations such as addition, subtract, division, multiply?
- how to run servlet (December 7, 2010, 8:16 PM): Servlet run procedure for J2EE Software
- give the code for this ques/// (December 7, 2010, 8:13 PM): write a program in java which contains a class simple. this class contains 4 members variables namely a, b, c, and d. This class also contains 3 constructors of 2, 3 and 4 arguments and a function show(). Now in main(), create a object which calls a constructors of 4 arguments, a 4 argument construct...
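One of the questions above asks how to convert a byte[] into a ByteArrayOutputStream. A minimal sketch (the class and method names here are illustrative, not from the original thread): you simply write the array into a fresh stream.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class ByteArrayDemo {

    // Copy an existing byte[] into a ByteArrayOutputStream by writing it in.
    public static ByteArrayOutputStream toStream(byte[] data) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(data);   // appends the whole array to the stream's internal buffer
        return out;
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = {1, 2, 3};
        ByteArrayOutputStream out = toStream(bytes);
        System.out.println(out.size());
    }
}
```

Calling toByteArray() on the result returns a copy of the accumulated bytes, so the same stream can keep collecting further writes afterwards.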
http://www.roseindia.net/answers/questions/181
Red Hat Bugzilla – Bug 56995: alchemist-1.0.18-1 is not up-to-date with python2
Last modified: 2008-05-01 11:38:01 EDT

If I upgrade python from 1.5.2-30 to 2.2-0.10b2, it breaks printconf-gui with the following error message:

    # printconf-gui
    Traceback (most recent call last):
      File "/usr/sbin/printconf-gui", line 7, in ?
        import printconf_gui
      File "/usr/share/printconf/util/printconf_gui.py", line 40, in ?
        from printconf_conf import *
      File "/usr/share/printconf/util/printconf_conf.py", line 509, in ?
        from pyalchemist import *
    ImportError: No module named pyalchemist

The problem is that the module pyalchemist is installed in /usr/lib/python1.5/site-packages/ and therefore cannot be found by python2.

1.0.20-1 is compiled for python 2.2.
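The underlying issue is generic: each Python interpreter only searches its own site-packages directories, so an extension installed for Python 1.5 is invisible to Python 2.2. A quick diagnostic sketch (not from the bug report; the function names are illustrative) that shows where the running interpreter would look and whether a module is importable:

```python
import sys

def module_search_dirs():
    """Return the entries on sys.path that look like site-packages dirs."""
    return [p for p in sys.path if "site-packages" in p]

def can_import(name):
    """Report whether `name` is importable by the current interpreter."""
    try:
        __import__(name)
        return True
    except ImportError:
        return False

if __name__ == "__main__":
    print(module_search_dirs())
    # False unless pyalchemist was built for this interpreter version
    print(can_import("pyalchemist"))
```

Running this under each interpreter version makes the mismatch obvious: the path containing the module appears in one interpreter's list but not the other's.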
https://bugzilla.redhat.com/show_bug.cgi?id=56995
In ASP.NET, how can I open an MS-DOS window using a hyperlink?

I am working on ASP.NET and I would like to know how I can open an MS-DOS window by using a hyperlink. Can you tell/show me how to do this?

There are two sides to this question. You may want to execute a program from the command prompt on the server side, after a click on a hyperlink server control, or you may want to open the command prompt on the client side after the user clicks a link.

The first case may be achieved with the following code in the hyperlink's click event handler:

    System.Diagnostics.Process.Start("cmd");

You can also use the ProcessStartInfo class in the same namespace to gain better control over the process execution, such as redirecting the standard input/output streams, waiting for completion, etc.

The second case is a matter of client-side scripting. I'm sure there are other ways of doing this, but one of them is using the Shell object from WSH (which must be installed on the client machine, of course):

    Set WshShell = CreateObject("WScript.Shell")
    WshShell.Run("cmd.exe")

You will receive the usual warning about security and ActiveX execution.
http://searchwindevelopment.techtarget.com/answer/In-ASPNET-how-can-I-open-an-MS-DOS-window-using-a-hyperlink