Monday, January 21, 2019

#176 - Sunday, January 20, 2019 - Lunar Eclipse!

Sunday night was the "Super Blood Wolf Moon," as I'm sure you've heard by now a dozen times.  If you need words like "super," "blood," and "wolf" to get excited about astronomy, that's cool and all, but really it's just a lunar eclipse!  It did happen to coincide with a "supermoon," which is when the full moon occurs while the moon is slightly closer to Earth in its elliptical orbit.  The "wolf" part comes from the fact that every month's full moon has a special name, usually said to be from Native American tradition -- according to the Farmer's Almanac, at least.  (I loved the Farmer's Almanac when I was growing up -- I'm a little bit of a data junkie.)

When I got started in astrophotography in 2015, I felt like lunar eclipses were common.  I didn't use to pay a whole lot of attention to celestial events before I got a telescope, but I knew there had been several lunar eclipses visible in North America in the previous two years (September 28, 2015; April 4, 2015; October 8, 2014; and April 15, 2014, to be exact).  We had a lot of clouds in September 2015, and I was out of town a fair bit, so I missed my first lunar eclipse with a telescope.  I wasn't worried though; they happened so often!

Well, it would be more than three years before my next opportunity came.

I saw this one on the list of eclipses when I was doing solar eclipse research back in 2017, and I made sure to mark my calendar!  As it turned out, I was going to be out of town that weekend, so I made sure to bring some gear with me.  I do wish I'd been able to set up my full scope, though!

I recently bought a used Sky-Watcher Star Adventurer tracker mount from a good astro-buddy of mine, which I am hoping will be a step up from my Vixen Polarie in both features and accuracy.  I've had a pretty tough time with the Polarie, and it would take the purchase of a lot of accessories to make it better.  The Star Adventurer came with everything I needed -- counterweights, polar scope, 3/8-inch tripod connection, and even some really cool additional features like built-in timelapse triggering, the ability to slew in RA, and additional speeds up to 12x sidereal (for timelapse).

I won't normally have the center post extended like this, but I just whipped the setup together to figure out how to build it.  

I bought a new camera bag to hold both the Star Adventurer and the Polarie, and it's a perfect fit.  I even have spare batteries, USB cables, velcro, and a couple tools in there.  In another camera bag, I brought my Nikon D5300 and D3100, all three of my lenses, and my intervalometer and remote release.

I spent a goodly portion of Saturday writing the BackyardNikon script.  I used Mr. Eclipse's (Fred Espenak's) lunar eclipse exposure table for guidance, and then I timed how long each frame took to download before the next could be captured so that I could work out the schedule.  I also used the neat graphic on Time and Date to see what time each event was going to occur.
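The scheduling arithmetic behind a script like that can be sketched as follows.  The phase times, settings, and per-frame overhead here are placeholder values for illustration, not my actual spreadsheet numbers:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"
DOWNLOAD_OVERHEAD = 8.0  # placeholder: measured seconds per frame for download, etc.

# Hypothetical phases (start, end, ISO, shutter in seconds); the real values
# came from the exposure table and the event times on Time and Date.
phases = [
    ("2019-01-20 21:34", "2019-01-20 22:41", 200, 1 / 2000),  # partial phase
    ("2019-01-20 22:41", "2019-01-20 23:43", 200, 0.5),       # totality
    ("2019-01-20 23:43", "2019-01-21 00:50", 200, 1 / 2000),  # partial again
]

def frames_in_phase(start, end, shutter, overhead=DOWNLOAD_OVERHEAD):
    """How many exposures fit in a phase, given the per-frame overhead."""
    duration = (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()
    return int(duration // (shutter + overhead))

for start, end, iso, shutter in phases:
    print(f"{start} -> {end}: ISO-{iso}, {shutter}s, ~{frames_in_phase(start, end, shutter)} frames")
```

The point of timing the download overhead first is that it, not the shutter speed, dominates how many frames you can fit into each phase.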

I kept track of what phase at what time needed which exposures in a spreadsheet.

In addition to the ones on the spreadsheet, I also took a lot of exposures at ISO-200 and at shutter speeds bracketing the ones listed.

While I was making sure everything was going to work, on a whim I tried plugging my new intervalometer (my last one succumbed to dew) into the D3100.  None of the three intervalometers I had tried with it before had worked (despite the fact that a remote shutter release on the same port works perfectly fine).  But much to my surprise, this one worked!  And not only that, but it worked for long exposures too.  This is a game-changer!  Now I can put the D3100 to work at night again.  It still won't talk to the computer beyond letting me offload the files, but I can set it up to run on its own all night long if I only need one exposure time and ISO setting.  Which is most of the time.  Woo hoo!  Now I don't have to buy another DSLR to do that...

At about 8 PM local, I went down to the spot I had scouted earlier, which was just outside a door so that I could go back inside where it was warm.  I set up inside and then brought everything outside.  I put the D5300 on the Star Adventurer, and then my D3100 on this little 6-inch tripod I have to record the whole thing wide-angle.  I got everything built, but couldn't find the USB cable for my D5300.  I scoured my camera bags, but it was nowhere to be found!  So I had one of my friends watch my gear and I dashed back to my hotel room to see if I'd left it on the desk -- but it wasn't there either.  It's a special proprietary connection of course (dear Nikon, please just switch to micro USB, or even USB-C now), so if I couldn't find it, I was going to have to use the intervalometer for the partial phases and control the camera myself around and during totality.  After not finding it in my room, I once more tore my bags apart, and finally found it hiding underneath a ball mount in the mount bag!  I rushed everything outside and just disabled the first line of the script so that it would start at sequence 2 at 8:40 PM.  After making sure everything was up and running, I went back inside and had a couple beers with some new friends.

I couldn't actually polar align the Star Adventurer because my spot with the southerly view and the nearby warmth also meant that the building was blocking Polaris.  Luckily, the moon only needed short exposures, not the long ones I usually do for deep-sky objects, so good polar alignment wasn't important -- I just needed to be close (which my phone's compass and my GPS latitude helped with).  I did have to periodically adjust where the camera was pointing, however, since the moon kept drifting toward the edges of the frame.

By about 9:20 PM local, you could start to see the bottom of the moon beginning to darken.  Over the next hour, the bite taken out of the moon grew larger and larger.  

Camera: Nikon D5300
Lens: Nikon 70-300mm lens at 300mm, f/5.6
Mount: Sky-Watcher Star Adventurer
ISO-200, 1/2000s

We popped outside every once in a while to check on the progress.  I also checked that my exposures were still going.  

As totality neared, the moon started to look very cool.  We stood outside for about 20 minutes watching the last sunlit sliver shrink.

ISO-400, 1s

Finally, it was totally in the Earth's shadow!  It was pretty dark.  It looked to me like a dim red-brown button on a black velvet pillow.  The sky was very dark for being in a city -- the transparency was excellent, which helped.  It was beautiful and eerie.  Imagine being some ancient person and having no idea what was happening to the moon!

ISO-200, 1/2s

My script had gotten a little behind, so I stopped it, disabled all of the sequences up to the current time according to my spreadsheet, and hit the go button again.  I also took a number of my own exposures on the camera during the pauses, although I forgot to put on the remote shutter release, so many of those turned out blurred.  The focus had also slipped a bit, and at some point dew formed, since I didn't have my dew heaters close enough to the aperture.

A couple times throughout, I had to move the mini-tripod in order to keep the moon in the field-of-view of the D3100 with my 18-55mm lens at 18mm.  It's not quite wide enough to get the whole thing.  I will figure out some way to mosaic them...

Around 1:40 AM, everything was over, the moon was back to being bright and full, and I was tired!  It was a great night and an amazing event.  I couldn't have asked for clearer skies (although warmer temperatures would have been nice!).  It was also windless, which helped a lot.

The next step was to come up with cool ways to display the images.  This one is my favorite so far.

I will keep working on more!

Here's a video:

The D3100 ran on the intervalometer, and with a couple changes in exposure time and position so that the moon wouldn't go out of the camera's field-of-view, I put together this cool sequence shot!
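Sequence shots like this are typically a "lighten" blend: each output pixel keeps the brightest value it ever had across the stack, so the moon survives at every position while the dark sky cancels out.  A minimal sketch with NumPy (assuming the frames are already loaded as RGB arrays, e.g. with Pillow):

```python
import numpy as np

def lighten_composite(frames):
    """Lighten-blend a stack of images: per-pixel maximum across all frames.
    `frames` is an iterable of HxWx3 uint8 arrays (e.g. from PIL.Image.open)."""
    stack = None
    for frame in frames:
        stack = frame.copy() if stack is None else np.maximum(stack, frame)
    return stack

# Hypothetical usage with Pillow (file names are made up):
#   from PIL import Image
#   import glob
#   frames = (np.asarray(Image.open(p).convert("RGB"))
#             for p in sorted(glob.glob("eclipse/*.jpg")))
#   Image.fromarray(lighten_composite(frames)).save("sequence.jpg")
```

Because it's a running maximum, memory use stays at one frame regardless of how long the intervalometer ran.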

Monday, January 14, 2019

#175 - Saturday, January 5, 2019 - Winter Fun

I definitely put "2018" in the title of this post and had to change it to 2019.  Oh January!

Astro-buddies do more than just show off views in each other's scopes and try to find each other by voice alone in the dark -- we also sometimes hang out during the day and do non-astronomy stuff!  A group of about 15 of us met at the park where our club's observatory is located and hiked along the scenic trails.  It was muddy, but it was a sunny day with temperatures closing in on 50 degrees, so it was beautiful!

Me crossing an overflowing creek with the help of my trekking pole!

Because of all the rain we'd had over the prior couple of days, the creeks and river flowing through the area were pretty high and rather muddy, which made for some extra adventure!  After the hike, we all went and had lunch at a Caribbean-themed local restaurant, which was delicious.

Lunch ran long because of the large group, so I dashed home to grab my gear and return to the park for member's night.  The skies were very clear, but the forecasts were mixed on whether there would be thin, high clouds, so I couldn't decide whether I wanted to bring my telescope out or not.  Ordinarily, especially during the winter, I like to use my club's memorial telescope in its own dome on the observatory property, since all I have to do is plug in my camera and get on target.  But it's open for everyone to use on member's night, so I had to decide whether to bring my own telescope rig, or just do widefield imaging on my Vixen Polarie.  By the time I got home, I'd made up my mind - I would be crazy not to bring my scope out with that good of a chance of it being clear!  How lazy have I become??  Especially since my Celestron AVX + Borg combination is so light and easy to set up.  Now, that did mean I had to bring a lot of stuff: the mount box, mount tripod, Borg case, ZWO and guide camera case, Nikon and Vixen Polarie camera case, camera tripod, tool box (giant tackle box that has everything from tools to adapters to velcro to batteries), accessory box (bin with power strip, dew heaters, extension cord, AC-to-DC power converters, etc), folding table, and chair.  But it would be worth it if the skies stayed good!

I didn't get out to the park until nearly 6 PM, and member's night started at 5:30.  Since darkness was coming fast, I hurried to set up both the AVX and the Vixen Polarie.  I set up the Polarie first, and worked on the AVX while test frames were coming in so I could hone the polar alignment.  My plan for the Polarie with my Nikon D5300 riding atop was to image the lower half of the Orion constellation, including the Great Nebula of Orion, the Flame and Horsehead Nebulae, and any other nebulosity I could tease out of the background in this very dusty and colorful region.  After messing with the polar alignment until the streaking in the stars was minimized, I tried for 3-minute subframes, but that was still just a bit too much for it even at 100mm of focal length, and so was 2 minutes 30 seconds.  I wanted some detail in the nebulae, so I needed very little streaking.  I settled on 2 minutes, set the ISO to 1600, and let it go.  Orion was still too low, but starting early meant I could focus on the AVX, and I would just delete the early frames later.

Single 2-minute subframe on the lower half of the Orion constellation

I've been meaning to apply the same backlash fix to the RA axis as I did the dec axis, but haven't gotten to it yet -- I've been suuuuper busy lately.  In addition, the dec axis was weirdly tight (I still don't think I have the clutch knob in exactly the right position), so I couldn't really balance it.  After getting it built, the first thing I did was polar align with SharpCap, but that proved difficult because the backlash in RA meant that the field kept moving even as I was waiting for frames to come in between mount adjustments.  As a result, my gotos weren't very good at all.  I finally bought a dual-finderscope mount so I could have my red dot finder and my guide scope loaded at the same time, which would have helped the alignment process go more smoothly if I'd actually been able to boresight the red dot finder.  The bracket holding it is somewhat tall, and I can't get it to adjust low enough to be looking at the same spot in the sky that the scope is.  I'll need to find a shorter dual bracket...or a more-widely-adjustable lightweight finder.

After aligning, I used Precise Goto to slew to a target that has been on my list for a while now, but keeps not happening for one reason or another -- the Cone Nebula.  But since my gotos weren't good, I wasn't totally confident it really did put it in the middle, and I haven't re-set-up plate solving on my replacement tablet yet.  Besides, I didn't want it in the middle -- there's some more cool-shaped nebulosity and a cluster just above it, which I wanted as well.  Conveniently, there's a 5th-magnitude star, 15 Monocerotis, right in the center of where I'd want the frame to be, which I was pretty sure I could identify in the frame, so I centered on it.  I started taking 3-minute luminance frames on my ZWO ASI1600MM Pro, but I couldn't see the nebula in the raw FITS files.  However, it's not a terribly bright thing, so I decided to cross my fingers that it would come out in processing and pressed forward with imaging, even though there was a chance I wasn't even on it.  Guiding was looking good, however.  With everything rolling, I went inside to warm up my freezing fingers and toes and finally have some conversation with my sky people.

30 luminance frames later, I went outside and carefully flipped to the red filter.  I also grabbed a pair of binoculars so I could take a look at the Great Nebula of Orion, M42, since the sky was pretty darn clear.  I was so glad I had brought out the scope!  The transparency was good enough that many of us could make out the wintertime Milky Way up high, and another club member was able to spot M33 in binoculars.  People looking through dobs and binoculars were ooh-ing and ah-ing at M42.  

I came back out later on to switch to the green filter and saw that the laptop I was lending my minion Miqaela, whose laptop is out of commission at the moment, had fallen off the table!  It was in a tub, so it wasn't on the muddy ground at least, but I had no idea how it'd gotten there.  Then I saw that my chair had fallen onto my mount tripod as well!  It was a bit breezy, so there must have been a gust of wind that knocked a few things over.  Luckily, my camera tripod with the Polarie didn't appear to have moved at all (I hung the AC-to-DC adapter for my camera off of the tripod hook, which helped a bit).  The alignment was off a bit, so I re-added the alignment stars, which happened to be on the same side of the meridian that the Cone Nebula had just crossed into, so I didn't bother with the calibration stars on the east side of the meridian. I did a Precise Goto, but I don't think it got me quite to the same spot.  But the sky quality was declining anyway, and it was getting late, so I decided just not to deal with it because that would mean re-polar-aligning and re-aligning, which I didn't feel like doing in the cold.

Around 12:30 AM, the clouds started rolling in, so it was time to pack up.  We got out of there by 1:30 AM, and I was in bed by 2:20.  Luckily it was a Saturday night!  I'll be curious to see if the Cone Nebula is in my luminance images.  If not, I at least have my Orion constellation widefields to enjoy!

[Update January 13, 2019]


I started with processing the Orion constellation ones because I'm pretty excited about them.  I won't do an exhaustive step-by-step here because I have that in other posts, but I'll outline the workflow in PixInsight 1.8.6 and talk about anything special or new.

First, I created master dark and bias frames (slowly working my way through my library creating these as needed), and then calibrated the light frames with them.  Then I debayered the light frames.

Subframe Selector

Since I had 127 frames after deleting the ones that were too cloudy or too low in the sky, this dataset was a good candidate for me to weed out even more non-ideal frames.  A handy way of doing this is the SubframeSelector script.  One of the parameters it requires is your camera's e/ADU, or electrons per analog-to-digital unit.  When your camera is reading out the image on the sensor, an analog signal of voltage (which was originally electrons collecting on a capacitor) is converted to digitized units.  This conversion depends on the gain setting of the camera.  While astro CCD cameras usually tell you what the e/ADU is for a few different gain settings, this is not the case for DSLRs, and I have long struggled with just having to make a guess of this value.  However, upon further inspection of the Light Vortex Astronomy tutorials, I found a page that talks about a PixInsight tool for calculating it -- because of course there's a PixInsight tool for this!  It's called BasicCCDParameters. The link for the Light Vortex tutorial is here.
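For the curious, BasicCCDParameters is automating the classic photon-transfer method: shot noise variance scales as signal divided by gain, so two flats and two biases are enough to solve for e/ADU and read noise.  Here's my rough sketch of the idea (not the script's actual code):

```python
import numpy as np

def gain_and_read_noise(flat1, flat2, bias1, bias2):
    """Photon-transfer estimate of gain (e-/ADU) and read noise (electrons).
    Differencing two like frames cancels fixed-pattern noise; the random
    noise variances add, hence the factor of 2 in the read noise line."""
    flat_signal = flat1.mean() + flat2.mean() - bias1.mean() - bias2.mean()
    flat_var = np.var(flat1.astype(float) - flat2.astype(float))
    bias_var = np.var(bias1.astype(float) - bias2.astype(float))
    gain = flat_signal / (flat_var - bias_var)   # e-/ADU
    read_noise = gain * np.sqrt(bias_var / 2.0)  # electrons per single frame
    return gain, read_noise
```

This is why the script wants matched pairs of flats and biases at the same ISO: the gain falls out of the ratio of signal to excess variance, with no absolute calibration needed.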

I went and located flats, biases, and darks all taken at ISO-1600.  I got biases and darks at the same temperature - 58 degrees - but it's been a while since I've done flats, so I hunted around and found some at 48 degrees.  Hopefully that's close enough.  I also needed to find out what the maximum ADU was, so I needed an image that was saturated.  I remembered that some pictures I took on a road trip through Glacier National Park in 2017 had come out saturated, so I opened up one of them in PixInsight to check its max value.  To do that, I used HistogramTransformation, changed it to 16-bit mode, and looked for the saturation peak.  It was indeed at 16,383, which is what the BasicCCDParameters script detected initially anyway.  I loaded them all into PixInsight, checked the parameters, and hit Report.

It's so interesting to finally see these numbers!  The gain I was looking for is 0.115 e/ADU.  Also worth noting are the read noise of 1.465 electrons, dark current of 0.002 electrons per second, and full well capacity of 1,889 electrons.  That well capacity is quite low (which is not ideal), but the dark current is very low (which is ideal!) -- way lower than I would have thought!  Same with the read noise.  My ZWO boasts a read noise of 1.2 electrons, so it's about on par.  CMOS for the win!
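Those numbers also cross-check nicely: full well capacity is just the gain times the saturation level, which takes one line to verify:

```python
gain = 0.115     # e-/ADU from BasicCCDParameters
max_adu = 16383  # 14-bit saturation level found with HistogramTransformation

full_well = gain * max_adu
print(round(full_well))  # 1884 -- consistent with the reported ~1,889 electrons
```

(The small difference is just rounding in the reported gain.)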

With that value in hand, I went back to SubframeSelector and followed the Light Vortex tutorial about it.  After my computer crunched away at analyzing the frames, I first looked at the plots to see how things looked.  SNRWeight was interesting -- you can see it increasing throughout the course of the night, a direct result of the haze that slowly drifted in.

After looking at the plots, I made some cuts, using the following expression in the Approval box:
FWHM < 2.9 && Eccentricity < 0.89 && SNRWeight <= 3.3
This cut out 23 of my 127 frames.  A reasonable number.

Then I used the expression from the tutorial to calculate the weights.
(15*(1-(FWHM-FWHMMinimum)/(FWHMMaximum-FWHMMinimum)) + 15*(1-(Eccentricity-EccentricityMinimum)/(EccentricityMaximum-EccentricityMinimum)) + 20*(SNRWeight-SNRWeightMinimum)/(SNRWeightMaximum-SNRWeightMinimum))+50
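For illustration, here's the same approve-then-weight logic outside of PixInsight.  The per-frame measurements below are invented; SubframeSelector supplies the real ones:

```python
# Hypothetical per-frame measurements: (FWHM in px, eccentricity, SNRWeight)
frames = {
    "DSC_0170": (2.5, 0.80, 3.1),
    "DSC_0171": (3.2, 0.85, 3.0),  # FWHM too high -> rejected
    "DSC_0172": (2.3, 0.75, 3.2),
    "DSC_0173": (2.8, 0.92, 2.9),  # eccentricity too high -> rejected
}

# Approval expression: FWHM < 2.9 && Eccentricity < 0.89 && SNRWeight <= 3.3
approved = {name: m for name, m in frames.items()
            if m[0] < 2.9 and m[1] < 0.89 and m[2] <= 3.3}

def spans(values):
    """Return (minimum, range) for normalizing a metric to 0..1."""
    lo, hi = min(values), max(values)
    return lo, (hi - lo) or 1.0  # avoid dividing by zero

(f_lo, f_span), (e_lo, e_span), (s_lo, s_span) = (
    spans([m[i] for m in approved.values()]) for i in range(3))

# 15 points each for sharpness and roundness, 20 for SNR, plus a floor of 50,
# mirroring the tutorial's weighting expression.
weights = {name: 15 * (1 - (f - f_lo) / f_span)
                 + 15 * (1 - (e - e_lo) / e_span)
                 + 20 * (s - s_lo) / s_span
                 + 50
           for name, (f, e, s) in approved.items()}

best = max(weights, key=weights.get)
print(best, round(weights[best], 1))  # -> DSC_0172 100.0
```

Each metric is normalized over the approved set, so the weights always span 50 to 100 regardless of the night's absolute numbers.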

Finally, I went back to the table of values shown in the plot and sorted by weight to find the best subframe.  It was DSC_0172, so I opened the original file in the Windows photo viewer (you can't open files in PixInsight while you have a script open) to check for airplane or satellite trails.  It was clear, so that's the one I used as the registration reference frame.

Making it pretty

Once registration and stacking (using the weights as a keyword) were complete, it was time for DynamicBackgroundExtraction.  Selecting the sample points for this process was tough -- first, I had to decrease the sample size to 10 pixels because there were so many stars in this image, and I had to adjust the location of almost every one of the samples to not be on a star.  Second, this is a very nebulous region, and I was trying to avoid sampling the dim background nebulosity I can't see so that I could bring it out later.  I had to delete nearly all of the points in the lower right quadrant because I could see nebulosity in them.  It shows up as colorful pixels between what are obviously stars in the sample display box of the DBE process.

The result was very exciting!

Look at all that dark nebula!  And that's just a screen stretch!  Get excited!!

The rest of the process was what has now become mostly my workflow:
- Color calibration with PhotometricColorCalibration
- Denoised with MultiscaleLinearTransform, with a lightness mask (stretched)
- Created star model with DynamicPSF for use with Deconvolution
- Created range mask and star mask from already-stretched lightness mask, subtracted the two
- Used combo mask for applying deconvolution, 30 iterations
- Stretched with HistogramTransformation
- Contrast enhancement with HDRMultiscaleTransform, with range mask
- CurvesTransformation to touch up curves
- ColorSaturation - boosted reds
- Ran the DarkStructureEnhance script, with lightness mask
- Additional denoising with ACDNR

Aaaaaand here it is!!

Date: 5 January 2019
Object: Orion Widefield
Attempt: 3
Camera: Nikon D5300
Telescope: 55-200mm lens @ 85mm, f/5.6
Accessories: N/A
Mount: Vixen Polarie
Guide scope: N/A
Guide camera: N/A
Subframes: 104x120s (3h28m)
Gain/ISO: ISO-1600
Stacking program: PixInsight 1.8.6
Stacking method (lights): Average, linear fit clipping
Post-Processing program: PixInsight 1.8.6
Darks: 100
Biases: 20
Flats: 0
Temperature: 41F (approx - didn't have thermometer)

I'm very excited about this result!  The core of M42, the Great Nebula of Orion, is blown out as expected, but between having excellent focus, good tracking, and using SubframeSelector, I got some very nice detail for how large my pixel scale was (due to the short focal length).  In addition, I picked up some dusty regions, including the elusive Witch Head Nebula in the upper right, which is truly incredible considering the level of light pollution I have to deal with.  Also featured are the Flame Nebula and the Horsehead Nebula at the bottom of Orion's Belt, the Running Man Nebula right beside M42, and the small nebula M78 near the left edge of the image.  See the AstroBin link above for a full list!
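For context on just how large that pixel scale is: image scale in arcseconds per pixel is 206.265 times the pixel pitch in microns divided by the focal length in millimeters.  Assuming the D5300's roughly 3.9 µm pixels:

```python
def pixel_scale(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel: 206.265 * pitch / focal length."""
    return 206.265 * pixel_um / focal_mm

# Assumed ~3.9 um pitch for the D5300, lens at 85 mm:
print(round(pixel_scale(3.9, 85), 1))  # 9.5 arcsec/px -- very coarse compared
# to the ~1-2 arcsec/px of a typical deep-sky telescope rig
```

At nearly 10 arcseconds per pixel, seeing and small tracking errors mostly disappear, which is part of why short-focal-length widefields are so forgiving.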

I haven't had a chance to see what I got out of the Cone Nebula yet, but hopefully soon I can process the luminance and see if I was even on target!

[Update January 15, 2019]

I wanted to take a look at the luminance data, so, in case it turned out to be good, I went through the whole process of creating the master dark and bias, calibrating the lights, using SubframeSelector to eliminate less-than-ideal frames (kept 20/25), registering, and stacking.  Aaaaaaand there's nothing there. :O

Just a dust spot and a bunch of stars.  Rawr!  Next time...

I wanted to find out where I was really pointing, so I uploaded the stretched JPEG to Astrometry.net.  After it finished solving, I clicked the button to view it in World Wide Telescope so I could see where it was on the sky.

Saturday, January 12, 2019

#174 - Thursday, January 3, 2019 - A Cloudy Start to the New Year

After work on Thursday, I had a few errands to run.  First I went to the post office to mail a package.  Then I went to Target to buy one of the storage cube shelving unit things because my astronomy gear is overflowing and I'm losing track of the smaller pieces, such as cameras and devices that have been given to me that I haven't installed and tested yet, my solar filters and finder, extra flashlights, my label maker, etc.  Then I went and got a haircut.  I finally arrived home at about 5:30, and realized that it was clear outside!  I checked the forecasts -- they showed clear!  So I grabbed a tupperware of leftovers, my camera bag, coat, and hat, put on my warm boots, and ran out the door to go to the observatory!

I wasn't the only one who got excited by the forecast -- my minion Miqaela and fellow club member Bob beat me out there, and Miqaela was already set up and waiting for darkness.

By the time I got out there, the sky was looking sort of scummy, but I had faith that it would clear.  So I opened up the memorial dome, plugged in the power, and set up the main imaging camera (ZWO ASI1600MM Pro) and guide camera (QHY5 -- I was having some driver issues on my replacement tablet (after the screen was accidentally shattered) with my QHY5L-II, and I didn't feel like trying to fix them in the cold).  Since it was still hazy, but stars were visible, I went ahead and fine-tuned the polar alignment on the new Celestron CGX-L we put in there recently, which keeps slipping every time I come back.  I need to double-check that all the bolts are tight the next time I'm out there.

I went back inside to wait for it to clear...and waited...and waited...and finally by about 10:30 we decided to call it a night.  It was just getting worse.  Darn!  Another night lost...but at least I got some sleep mid-week!  Because, y'know, work is important and all.

Hopefully soon...

Tuesday, December 18, 2018

#173 - Tuesday, December 18, 2018 - The Moon and the Library

Over the last year and a half, I have been doing astronomy outreach programs at the local libraries to show patrons how to use the library telescopes.  My astronomy club donated an Orion StarBlast 4.5 to one of the libraries in 2017, and modified it as well to make it easier to use and a little more fool-proof (attaching strings to the caps so they don't get lost, swapping the CR2032 button cell battery on the red dot finder for two AA's instead, etc).  It was so popular that other libraries in the area purchased their own (they're only about $200) and we modified those for them as well.  Patrons can check them out for a week at a time and take them home, camping, out to the local state park or other dark sites, etc.  My Girl Scout co-leader works at the library that got the first scope, and doing the program was her idea.  It's grown since then, and now I've done programs at seven libraries!

This library in particular had 20 people sign up and a waitlist, but only about 10 actually came over the course of the evening, ranging in age from toddlers to elderly.  It was chilly, but not too bad (low 40s), and the sky was fabulously clear!  Sometimes it's cloudy during the programs, so I show them how to use the scope indoors, and then put the planetarium app Stellarium up on a projector to familiarize them with the night sky.  But nothing beats looking through the eyepiece at something yourself!

Before the program started, I was sitting in the audience chairs and chatting with some participants when two middle school-age girls came bounding in.  One of them asked me, "Are you the professional astronomer?"  I said "Yes," and she proceeded to give me a giant hug!  She was so excited to meet an astronomer because she loved astronomy so much.  She got a telescope for her birthday one year, but it was accidentally dropped on their first night out, so she never got to look through it!  So she was excited to check these out.

First, I went over how the telescope worked and the different parts and pieces.  Then I did a demonstration of how one uses the finderscope to find an object and then looks through the eyepiece to get it centered and focused.  Finally, we took all of the scopes outside (usually the library hosting the program gathers scopes from around the county so that we can have multiple going at once) and set them on tables in the courtyard.  There were a lot of lights, trees, and tall buildings, but the waxing gibbous moon was high, and Mars was easily visible as well (although very small in the scope these days!).  The StarBlasts are very easy to use -- just plop one on a table, point, and look.  I nearly always recommend them when people are looking for a telescope for their kids, or want something really simple and cheap to get started with.  Despite the price and small size of the StarBlast, they deliver pretty good views for a starter scope - much more satisfying than cheap refractors.  You can see open clusters, some bright large nebulae, and the moon quite nicely with them, as well as the rings of Saturn and the moons of Jupiter.

After practicing with the moon and Mars, we all went back inside for hot cocoa and cookies.  It was a fun evening!  I always love doing these events.

Monday, December 17, 2018

#172 - Sunday, December 16, 2018 - A Rude Fog

So there I was, sitting at my desk still trying to process my comet images from last weekend, when I saw the thick blanket of clouds that had been covering us begin to thin out a bit.  I checked Clear Sky, and it showed that it would clear pretty substantially!  The local forecasts agreed, so I quickly threw together a peanut butter & jelly sandwich for dinner, got into my warm clothes, grabbed my camera bags and related gear, and dashed out the door.  There was still some fog floating around from the rain we'd had over the past few days, but I figured it would clear once the temperature dropped more.  But boy was I wrong!!

I got out there about 6, and the moon was high and about half full, but I figured I could get some more luminance frames and some in-focus green and blue frames on the Crab Nebula to beef up that dataset some and still have it be enough above the moonlight background to be fine.  But since it wasn't going to be up high enough until after 7:30, and there was still some thin fog around, I figured I could work on further fine-tuning the polar alignment on the new Celestron CGX-L that's in the memorial dome first.  But before that, I went ahead and got my Vixen Polarie set up with my Nikon D5300.  Earlier that evening, I got a reminder on my phone that I'd set a while back alerting me that Comet 46P/Wirtanen was passing near the Pleiades - perfect timing!  A clear night!  Hopefully.  I framed the image based on SkySafari, or at least my best guess - the comet had dimmed to mag +8.8, and it was too foggy still (and the moon was too bright) to see it in the subframes.

I figured I'd try to get the comet, the Pleiades, and the California Nebula in one shot with my 55-200mm lens set at 55mm!  So I let that go while I got the memorial dome set up for polar alignment.

I started with a regular polar alignment in SharpCap.  I had to crank up the exposure time to 2 seconds though, and have the gain at 300 with the histogram stretched, to get enough stars to plate solve.  The moonlight and the fog were killing me.

When I finally did get enough stars, it came back with a "fair" grade on the current polar alignment - off by less than half a degree, but still requiring some adjustment.  I expected to be closer than that, seeing as how I had just adjusted it the previous weekend to be as close as I could get with our bad seeing.  But I went ahead and adjusted it anyway.  My next goal was to do a drift alignment, which SharpCap doesn't have, but PHD does.  I hadn't tried using my ZWO ASI1600MM Pro with PHD before, but luckily it didn't take much convincing to get those two to talk to each other.  However, I don't think PHD could handle the 16-megapixel chip, and it doesn't have the ability to crop the image, as far as I know.  It was slow at getting frames, and then it kept losing the star, even though I could see at least a few pretty easily.

Finally, I gave up trying to get PHD to work with my ZWO camera, so instead I pulled out my QHY5L-II guide camera and attached it to the guide scope, since I'd need it there anyway when it cleared up soon.  But I couldn't remember if the mirror diagonal I usually use with my older QHY5 camera put the QHY5L-II far enough away to focus, and I was having trouble getting the guide scope to see a bright star.  So I went to the moon instead, and after fishing around a bit, finally got the moon to appear in the guide scope.  This was further complicated by the fact that I hadn't re-aligned the scope after the polar alignment adjustment (mainly because the fog was starting to thicken).  Moving the focuser didn't seem to get the moon any closer to being resolved, however, so I swapped back to the QHY5.  I was working on trying to get that re-focused, but by then my fingers were cold, and the fog had suddenly become so thick that I could barely see anything!

Seeing that there was no point in staying, especially since I had work the next morning and no idea when the fog might clear, I called it early, packed up, and went home.  What a waste!  The drive home took a while too, since the fog was so thick I had to drive below the speed limit, particularly because I usually need to use my brights on the dark country roads, which is not a good idea in dense fog.  

Monday, December 10, 2018

#171 - Monday, December 10, 2018 - Can't Get Lucky Twice

Last night was gorgeous!  And tonight was originally supposed to be clear too...

As of last night, the Clear Sky forecast said it would be clear tonight.  Then this morning, it decided it would actually be pretty cloudy, but the other four forecasts I check all said clear, even the most pessimistic Clear Outside one.  All day they said this!  So I packed up my cameras, made dinner quickly, and got out to the observatory by 6:30.  It was already dark, but the Crab Nebula wouldn't be high enough until 7:30 anyway.

But starting around 5 PM, the "civil" weather forecasts (as opposed to the astronomy ones) kept pushing back the hour that the clouds would clear, hour by hour, until now they say 10 PM (I'm sitting in the warm room at the observatory writing this).  I've got my Nikon D3100 clicking away hoping to catch the clouds clearing, but most likely that timelapse will be boring.  I'm at least taking dark frames on my Nikon D5300, since it's 29 degrees outside, and I have a lot of darks at 30 degrees but not a complete set.

I poked my head outside at 9 PM, but nope!  Still a thick blanket of clouds.  And now the civil weather apps are telling me it won't clear until midnight!  All righty, time to pack it up...Darn!  Hopefully another opportunity comes soon.  Time to go catch up on some sleep.

And since you took the time to read this sad little post, here's a meme.

#170 - Sunday, December 9, 2018 - Back After a Long, Cloudy Hiatus

It's been a month since the last time I was out at the observatory!  It's been cloudy!  But my first night back out was a good one - no moon, pretty good transparency, and the clouds stayed on the horizon.

The new rig my club put inside the memorial dome - a Celestron CGX-L mount with a Meade 127mm f/9 apo - was roughly polar aligned by another club member, but still needed some fine-tuning.

So I hooked up my ZWO ASI1600MM Pro on its filter wheel, ran all the cables and got everything hooked up, and started up SharpCap to run its polar alignment routine.  Polar alignment is important to get right for astrophotography -- you need the mount pointed directly at the north celestial pole in order to have accurate tracking.  SharpCap reported that the mount was off by about 43 arcminutes, which is close enough for visual observing, but is certainly what was causing all the drift I saw last time I was out.

SharpCap tells you whether you need to move the mount up or down and left or right, so I adjusted each direction until it reported an error of only 2-6 arcseconds, which varied every frame due to the atmosphere.  Pretty good!
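As a back-of-the-envelope check on why this matters: to first order, a polar-axis error drifts the field at up to (error) × (Earth's rotation rate).  A quick sketch of that arithmetic -- the 43-arcminute figure is SharpCap's initial report from above, and the 180-second exposure is just an example:

```python
import math

OMEGA = 2 * math.pi / 86164.1  # Earth's rotation rate, radians per second

def max_drift(polar_error_arcmin, exposure_s):
    """Worst-case field drift (arcseconds) accumulated over one exposure
    from a polar-axis misalignment, to first order: drift ~ error * omega * t."""
    return polar_error_arcmin * 60.0 * OMEGA * exposure_s

# The ~43' error SharpCap first reported, over a 180 s sub:
print(round(max_drift(43, 180), 1))      # ~34 arcsec -- badly streaked stars
# After adjusting down to ~5 arcsec of error:
print(round(max_drift(5 / 60, 180), 2))  # a few hundredths of an arcsec
```

The real drift also depends on where in the sky you're pointed, so treat this as an upper bound.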

After rebooting the mount so that I could put in new alignment stars since I changed the polar alignment, I had it go to Vega first, and its first guess was fairly close -- it put the star within the camera's field-of-view, at least.  After doing the two alignment stars plus four calibration stars, I told it to slew to M1, the Crab Nebula.  I had a tough time choosing a target -- we're not far enough into winter yet to have Orion and all of its goodies be high enough to image in the first part of the evening, but all of the summertime goodies were off to the west, which I try to avoid because of the light pollution.  I had originally expected polar alignment to take longer, however, so when the mount finished slewing, the telescope was pointing toward the trees!  So I slewed up to a higher star to focus the camera and guide camera, and then I left it alone for a bit while I set up my other rig for the evening: my Nikon D5300 on my Vixen Polarie.

I just bought a new tripod so that I could have a removable head to replace with the Polarie's Fine Adjustment head (on Amazon here).  I was also thinking about my trip to Chile, so I got one that folds up small and is lightweight.  This one is made of carbon fiber, and it has three leg joints to collapse the legs, which fold upward back along the center post so that the whole thing is less than two feet long.  But if you extend the legs and the center post all the way, it goes up to 63 inches tall.  It also has a hook in the center column to attach a sandbag or something to weight it down, and you can remove one of the legs and turn it into a monopod.  The whole thing was $130, which as far as nice tripods go, isn't too bad!  Unfortunately, the bolt that you attach mount heads to was a different size than I thought, so I still couldn't attach the Polarie FA head directly, and had to use my 3/8-to-1/4 bushing and attach the FA head to the ball mount that came with the tripod.  Soon I'll get all the right screws!

I got everything connected and got the Polarie pointed north, or at least as north as I can tell by aiming my eye through the sight hole on the edge of the Polarie.  The FA knobs help a lot to move it more precisely.  I wanted to test out using my guidescope and camera again, but it was too cold to want to mess with it.  My USB thermometer reported 29 degrees F.  There were a lot of moving parts in this rig - first I needed to make the tripod level, but the bubble level is on the ball mount head, so I had to get that approximately upright first.  Then I had the FA head screwed onto that, and the Polarie screwed onto that, and another ball mount head on the Polarie, whose rotating section is also screwed on with thumbscrews, and then finally the DSLR attached to that.  Sometimes I'd go to adjust the ball mount that the DSLR was on, and would instead accidentally turn one of the thousand other rotatable things that I hadn't screwed down tightly enough.

I got a little rail that attaches to the camera shoe (same size as for gun scopes) and attached the red/green dot sight that came with my Oberwerk binoculars to it.  Since it's perfectly aligned with the binoculars, I didn't want to re-aim it, so it's off for the camera -- but I've got another one on the way just for the camera!  Should make aiming way easier.

While I was adjusting the polar alignment, I saw a bright flash just above, and a slow-ish-moving green-blue meteor splashed across the sky.  Behind it was a sputtering tail of smoke.  Beautiful sight!  Possibly an early Geminid, since it was moving from east to west.

Finally, I got the camera pointed in the vicinity of the California Nebula (there's a string of three bright stars that make that very easy) and set at 100mm.  After a few focusing and test frames, I remembered that I had a much more important target to image -- Comet 46P/Wirtanen!  How could I forget! So I looked up its position in SkySafari, and it was nice and high just underneath Cetus, with a bright star nearby for reference.  I found it easily, although it took about 10 test frames before I was able to get it centered exactly where I wanted.  Finally that was all done, and I took test frames to see how long the Polarie would track for.  Two minutes showed streaks, 1 minute 30 seconds showed streaks too, and finally I got down to a minute and was still getting small star streaks.  So I adjusted the polar alignment a bit, and finally got reasonably small star streaks.  Then I let it run.

By the time this was done, it was a little after 7:30, and the Crab Nebula had cleared the trees up to the 20 degree mark, which is the absolute minimum I'll image at because the atmosphere is too mushy below that.  Since it was still kind of low, I decided to start with the blue filter and work my way backwards, since it's less important that the color channels be really clear, and then it would be nice and high and in the good part of the atmosphere for the luminance frames later.  I calibrated PHD for guiding, and it looked good.  I had set the camera cooler on -40C, which it reached, and everything else looked good to go, so I told it to take 15x180s frames, and I snuck back inside where it was warm.

A little later on, I came out with a pair of handheld binoculars from the equipment room and went hunting for the comet.  It wasn't difficult to find, although it looked more like a splotch than a comet.  If it wasn't so cold, I might have set up a larger pair of binoculars on a tripod, but it was cold!  I hurried back inside, where I had the warm room warmed up to a nice 70 degrees.

I periodically went out to the dome to check on things, and the blue and green filter images looked slightly out of focus.  When I changed to the red filter, I slewed to a nearby star and re-focused - it must have been pushed in a little when I was rotating the filter wheel.  My filter wheel is very stiff, especially when it's cold.  It won't be too long now before I just give up and get an electronic one and 2-inch filters...although I'm going to need to start plugging things into the ZWO's two back USB ports because I'm out of ports on my 4-port USB hub!

Guiding went okay-ish.  The dec axis looked good, but RA was bouncing all over the place.  I thought at first it was the seeing, but since dec looked fine, I wasn't sure.  I zoomed in on the stars, and they were slightly skewed up-down, which I think was along the RA axis.  I wonder if I need to apply some different settings in PHD for the CGX-L mount, since it's belt-driven.  The aggressiveness was already turned down to 70, but maybe that's still too high.

9:30 PM rolled around while I was hanging out in the warm room, and I tried to connect to the weekly Astro Imaging Channel show, but the wifi out here is too slow, and even my LTE cell service is too slow, at least inside the building.  Download is workable, but upload is almost nonexistent!  So I missed the call.  Darn!

I'm writing this while I'm out at the observatory the following night, Monday night, so I haven't had a chance to try processing the Crab Nebula image yet.  I was hoping to get more luminance frames tonight, but the clouds that were originally supposed to clear out by 7 PM are still here, and the forecast now keeps pushing that back hour by hour.  I've got my Nikon D3100 set up now pointed east in hopes of seeing the clouds clear, catching some Geminids, and watching Orion rise, but it's not looking hopeful.  I haven't even set up the memorial dome yet, or the Vixen Polarie.  I'm going to stick it out until 9, and if there's no clearing in sight, then I'll go home and get some sleep.

[Update December 15, 2018]
All right, finally got to process the final Crab Nebula image!  My blue and green channels ended up out-of-focus, like I thought, due to pushing the focuser in a bit while I was trying to rotate the filter wheel in those sub-zero temperatures.  But my fix of that for the red and luminance channels helped, and I was able to apply a deconvolution algorithm to the luminance and get some really nice detail back from the not-fantastic guiding.

These are the steps I followed in PixInsight, with guidance from the Light Vortex Astronomy tutorials:
- Used BatchPreprocess to generate master dark, master bias, calibrate & register light frames (linking the different exposure times of the RGB vs the L)
- Stacked each channel with Light Vortex tutorial recommendations
- Combined RGB channels
- Applied DynamicBackgroundExtraction RGB and L images to remove the light pollution background
- Applied PhotometricColorCalibration to properly color-balance the RGB image
- Dust spot removal using the CloneStamp on L and RGB (my attempt to clean the objective of the refractor didn't work very well when it was below freezing - the cleaning fluid didn't want to evaporate, so I had to use a hair dryer!  I think I also need to check my filters for dust)
- Denoising with MultiscaleLinearTransform on L and RGB, with a mask to protect the nebula from being blurred
- Stretched L and RGB images
- Applied the Deconvolution process with a PSF generated from the image (uses the shape of the stars to inform the algorithm)
- Combined LRGB into one image
- Reduced star sizes with MorphologicalTransform with a star mask (needed because of the un-focused G and B channels)
- CurvesTransformation to boost saturation, brightness, and I also used it with a star mask to reduce the green halos that were a result of the un-focused green channel image
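The "stretch" step in the list above is typically done with PixInsight's midtones transfer function (the same curve behind the ScreenTransferFunction's auto-stretch).  Here's a minimal numpy sketch of that function -- the midtones value of 0.05 is just an illustrative choice:

```python
import numpy as np

def mtf(x, m):
    """PixInsight-style midtones transfer function: maps 0 to 0, 1 to 1,
    and the midtones balance m to 0.5, lifting the faint end of the image."""
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

linear = np.array([0.0, 0.01, 0.1, 0.5, 1.0])
print(mtf(linear, 0.05))  # faint values get lifted hard; 0 and 1 stay fixed
```

Pushing m lower stretches harder, which is why the faint nebulosity comes up so fast while the already-bright stars barely change.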

And here you go!
Date: 9 December 2018
Object: M1 Crab Nebula
Attempt: 5
Camera: ZWO ASI1600MM Pro
Telescope: Meade 127mm f/9 apo (club's)
Accessories: Astronomik LRGB Type 2c 1.25" filters
Mount: Celestron CGX-L (club's)
Guide scope: Celestron 102mm (club's)
Guide camera: QHY5
Subframes: L: 10x300s
   R: 12x180s
   G: 14x180s
   B: 15x180s
Gain/ISO: 139
Stacking program: PixInsight 1.8.5
Stacking method (lights): Average, winsorized sigma clipping
Post-Processing program: PixInsight 1.8.5
Darks: 30
Biases: 50
Flats: 0
Temperature: -40C (sensor), 29F (ambient)

I'm continually amazed with what I can get out of an image that has sub-standard input data!  This whole thing is less than 3 hours of total integration time, and I managed to get some reasonable SNR (signal-to-noise ratio) and detail on the nebula.  Now just think if I had enough clear nights (and enough patience) to have much longer total integration times!
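The payoff of longer integration follows the usual shot-noise rule of thumb: SNR grows as the square root of total integration time, so you have to quadruple the hours just to double the SNR.  A one-liner to make that concrete:

```python
import math

def snr_gain(t_new, t_old):
    """Shot-noise-limited SNR improvement from more integration time:
    SNR scales as sqrt(total exposure), so the gain is sqrt(t_new / t_old)."""
    return math.sqrt(t_new / t_old)

# Going from ~3 hours of data to 12 hours would double the SNR:
print(snr_gain(12, 3))  # 2.0
```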

Having awesome software helps!  Here's the comparison of the pre- and post-deconvolution images:


Comet results are coming!  It takes a long time to process all 164 images for making the movie (calibration, background extraction, noise reduction, color calibration...), and then I'm learning how to use PixInsight's comet alignment process to make a final stacked image.  More coming soon!

[ Update December 30, 2018 ]

Well, I had about another two pages of text and images written in here, but then some dialog box came up that I didn't see when I alt-tabbed back over after checking on something, and I lost all of my work :(  Guess I'll start over... This time I reverted the post to a draft so that it would save my work -- for some reason Blogger doesn't do that if you're just editing an already-published post.  But why am I surprised?  Blogger is a pretty outdated platform that Google is probably going to kill anyway, along with nearly all the rest of its good ideas (sorry, I'm kind of bitter about them killing Inbox!)

I got busy with some work stuff, graduate school applications, and the holidays, so processing the comet image has been slow-going.  I've also had numerous setbacks -- this dataset is proving difficult!  I got the video done weekend before last, so I'll talk about that first.


I wanted to create the video first since several other people were making cool videos, and I figured it would end up being easier than processing the whole image.  There are a few steps in common between the two processes, though, so I could at least make some headway on both at the same time.  I've previously used Photoshop to turn a sequence of raw frames into something reasonable for a video, but I decided to give PixInsight a try.  It's an obvious choice since you can apply the same process to multiple images using ImageContainer.

First, I needed to calibrate the light frames with their corresponding darks and biases.  I had a master dark and a master bias from DeepSkyStacker at that temperature, ISO, and exposure time from another image I had processed previously, so I just decided to use those.  However, after calibrating, debayering, and registering, I checked some of the images, and they came out kind of crazy weird.  I wish I had a screenshot, but I dumped them all into the Recycle Bin and flushed the toilet.  So I started completely over and used PixInsight to generate a master dark and a master superbias, which is better than just a master bias alone, or so I'm told.  (The superbias does some smoothing and averaging to reduce noise -- it's as if you had used many more frames to create your master bias, so you record just the bias signal and not the frame-to-frame noise.)  After re-creating those, I re-calibrated, debayered, and registered all of the frames.  They looked much better.
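The calibration arithmetic itself is simple -- here's a toy numpy sketch of the core of what a calibration step does with a matching master dark (the frame values are made up, and a real run also handles flats, pedestals, and dark scaling):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real frames: light = signal + dark current + bias + noise
bias = np.full((100, 100), 512.0)
dark = bias + 20.0                                # master dark includes the bias level
light = dark + 1000.0 + rng.normal(0, 5, (100, 100))

# Core of light-frame calibration when the master dark matches the light's
# exposure and temperature: subtract the dark (which already contains the bias).
calibrated = light - dark

print(round(calibrated.mean()))  # ~1000: just the sky + object signal remains
```

The bias only gets subtracted separately when the dark has to be rescaled to a different exposure -- which is where a clean, low-noise superbias earns its keep.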

Next, I opened up the first and last image, and made two DynamicCrop processes, which between the two would cover the furthest extents of the black edges left behind by the registration process.  I used ImageContainer to apply the crops to all images in the sequence.

Next, I opened up DynamicBackgroundExtraction, since the raw DSLR images had a pretty strong green background as a result of there being twice as many green pixels as red and blue ones.  But it didn't really come out with the same-looking result for each frame, and I couldn't figure out how to make it do that, so I gave up and switched back over to Photoshop after running all the files through the BatchFormatConversion script to convert them from XISF (a PixInsight format) to TIFF.  It wasn't just the processing problems, though -- PixInsight was also generating these massive files, so I kept having to delete the images from the previous steps because my poor 128 GB SSD that I process stuff off of just couldn't handle multiple copies of all 94 of the 283 MB behemoths.

In Photoshop, I opened up one of the images and recorded an action of stretching, cropping, color balancing (by eye), adjusting curves, and reducing noise, and then I applied that action to all of the images.  Then I added a text layer, made copying that to the image an action, and applied that action to every frame, since I wanted text on the video saying what it was, the date/time, etc.  Finally, I converted all the images to JPG, loaded them into Timelapse Movie Monkey, and generated my comet video.  I posted it to YouTube on December 18th.


With the video done and a 4-day weekend upon me, I could finally take another whack at the complete comet image.  Previously, I used DeepSkyStacker for this, which has a neat comet stacking mode that allows you to do it one of two ways: hold the comet steady and streak the stars, or do two stacks and combine them to get steady comet and stars.  DSS is pretty quick about it and it produces a pretty good image if you choose the right stacking mode, but you have to select the nucleus of the comet in every frame, and who has time for that??  So I decided to see if PixInsight had a better way.  To the Google Machine!  I quickly found a tutorial from the PixInsight folks themselves on the process, located here.  It walks you through how to make a comet image where both the stars and the comet are steady.  First I'll integrate the comet, then the stars, and then combine.

Much to my delight, PixInsight does indeed do it the most awesome and logical way: its CometAlignment routine has you select the comet nucleus in only the first and last frame in your sequence time-wise, and then it registers everything from there.  Beautiful!
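Under the hood, that's just linear interpolation by timestamp -- the comet's motion across the frame is close enough to linear over a few hours.  A sketch of the idea, with made-up positions and times:

```python
def comet_position(t, t0, p0, t1, p1):
    """Linearly interpolate the comet nucleus (x, y) at time t from its
    measured positions p0 at time t0 and p1 at time t1 -- the same idea
    CometAlignment uses after you click the nucleus in the first and
    last frames of the sequence."""
    f = (t - t0) / (t1 - t0)
    return (p0[0] + f * (p1[0] - p0[0]),
            p0[1] + f * (p1[1] - p0[1]))

# Hypothetical session: nucleus at (1200, 850) at t=0 s, at (1310, 790) at t=9840 s.
# Halfway through, it should be at the midpoint:
print(comet_position(4920, 0, (1200, 850), 9840, (1310, 790)))  # (1255.0, 820.0)
```

Each frame then gets shifted by the difference between its interpolated comet position and the reference frame's, so the comet lands on the same pixel everywhere.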

I started with the calibrated, debayered, and registered images I had already generated from making the video.  I pulled them into the CometAlignment process and selected a reference image (the image that has the comet located where you want it to be in the final image).  But when I went to the first image to select the comet nucleus, the first 24 frames were missing!  I found them at the bottom -- somehow the timestamps had gotten messed up.  Back to the drawing board...

I deleted everything and decided to use BatchPreprocessor this time.  I loaded my lights, master dark, and master superbias, but when I hit Run, it came up with a memory read error, and also an error about a CFA issue with the dark frame.  CFA has to do with the debayering, so I wondered if the dark was debayered while the lights still weren't, or something like that.  So instead, I closed out of that script and just did it myself again.  

Calibration - the reduction in noise is easier to see when zoomed in on the raw file, sorry!
Also, it's monochrome because it hasn't been debayered yet.

When I went to debayer the frames with the Debayer process, PixInsight kept outright crashing, like totally shutting down.  I had just installed the new version, 1.8.6, and was regretting it already!  But then I remembered that the Light Vortex tutorials advised DSLR users to change the RAW format setting to "pure raw," so I did that, and it finally was able to process and not crash.  How to do that can be found in part 1 of this Light Vortex Astronomy tutorial.  

Debayered single frame -- now in color!

I used the Blink process to inspect the frames after registration.  I can only load about 35 or so at a time, since they fill up my RAM rather quickly (that and Google Chrome!)

I re-calibrated, re-debayered, and re-registered all 94 frames, and this time the timestamps were correct.  I selected the comet nucleus in the first and last frames, and then it crunched through aligning the frames on the comet.  

The frames are now aligned on the comet.

With the images aligned on the comet, it was time for integration (stacking).  I used the ImageIntegration process, which gave me just the comet and not the stars: because the stars change position in every comet-aligned frame, the integration process rejects them the same way it's designed to reject anything that changes from frame to frame (such as noise and satellites).  
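Here's a toy sketch of that rejection idea -- per-pixel sigma clipping around the median.  A star that lands on a different pixel in every comet-aligned frame is an outlier at each of those pixels, so it simply vanishes from the average (the winsorized clipping the actual stack used is a refinement of the same idea):

```python
import numpy as np

def sigma_clip_mean(frames, sigma=2.0):
    """Per-pixel average that rejects values far from the per-pixel median --
    the mechanism that removes moving stars from a comet-aligned stack."""
    stack = np.stack(frames).astype(float)
    med = np.median(stack, axis=0)
    std = stack.std(axis=0) + 1e-9          # avoid division-by-zero on flat pixels
    clipped = np.where(np.abs(stack - med) <= sigma * std, stack, np.nan)
    return np.nanmean(clipped, axis=0)

# 8 frames of flat sky (value 10); a "star" (value 200) crosses one pixel per frame
frames = [np.full((1, 8), 10.0) for _ in range(8)]
for i, f in enumerate(frames):
    f[0, i] = 200.0

print(sigma_clip_mean(frames)[0, :3])  # the star is rejected; ~10 everywhere
```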

Comet only.  It still has a high background, which we'll be killing soon.  The two dark spots aren't actually dust -- it's a bug on the lens that changed positions partway through!

Next, I went back to CometAlignment but this time, I used it to subtract the stacked comet image from the single frames to get just the stars.
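Per frame, that subtraction is just pixel arithmetic: shift the comet-only stack to that frame's comet position, subtract it, and what's left is the stars.  A toy example with made-up pixel values (shifting omitted for brevity):

```python
import numpy as np

# Hypothetical registered frame and comet-only stack (already comet-aligned):
frame = np.array([[10., 10., 200., 10.],
                  [10., 60., 80., 10.]])       # 200 = star; 60/80 = comet
comet_stack = np.array([[10., 10., 10., 10.],
                        [10., 60., 80., 10.]])  # star averaged away by the stack

# Subtract the comet (and sky background) signal, keeping just the stars:
stars_only = np.clip(frame - comet_stack, 0, None)
print(stars_only)  # only the star's 190 ADU above background survives
```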

Next came stacking all of the stars-only images.

After that, I cropped out the darkened edges that were a result of registration, cropping both the stars-only and comet-only images the same using DynamicCrop.  I had both images open, but cropped only one of them (the comet image has deeper edges than the stars one, so it would cover both), then I dragged the New Instance icon into the workspace, applied the process to the comet image, and then clicked and dragged the icon onto the stars image to crop it the same.  

Then it was time to work on the backgrounds.  The PixInsight tutorial leads you through two rounds of DynamicBackgroundExtraction -- the first one to reduce gradients, and the second one to further suppress the dim, streaked stars in the background of the comet-only image leftover from the stacking process.  The first DBE was easy, especially with the absence of stars, which meant I didn't have to check every sample point to make sure it wasn't on a star, I just needed to clear out the ones around the comet.  But after I applied it, the image was still green -- greener, actually!  

I compared settings with the PixInsight tutorial, and they were the same.  But then I checked with the Light Vortex tutorial, and saw that I actually needed to have "Normalize" un-ticked.  After I did that, the background was finally actually subtracted.  

The next step was to do a high-density background subtraction to eliminate the leftover stars in the comet-only image.  For this, you want a lot of sample points.  The PixInsight tutorial author's computer could only handle 41 samples per row, so he split his image into three parts for processing.  My desktop is pretty beef-tacular, so I figured it could handle the full 200.  It didn't have much trouble plotting those points or rendering them, although it was a tad slow navigating the image.  I zoomed in and deleted samples around the comet nucleus, since we don't want to kill that.
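What DBE is doing under all those sample points is fitting a smooth surface to the sampled background and subtracting it from the image.  Here's a simplified sketch that fits a low-order 2D polynomial instead of DBE's surface splines -- the grid size and degree are arbitrary choices:

```python
import numpy as np

def fit_background(img, samples_per_row=8, degree=2):
    """Fit a smooth 2D polynomial to a grid of background samples and
    evaluate it over the whole frame -- a stand-in for DBE's spline model
    (DBE also rejects samples contaminated by stars; this sketch doesn't)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, samples_per_row).astype(int)
    xs = np.linspace(0, w - 1, samples_per_row).astype(int)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    vals = img[gy, gx].ravel()

    def terms(x, y):
        # Polynomial terms x^i * y^j with i + j <= degree
        return np.stack([x**i * y**j
                         for i in range(degree + 1)
                         for j in range(degree + 1 - i)], axis=-1)

    A = terms(gx.ravel() / w, gy.ravel() / h)
    coef, *_ = np.linalg.lstsq(A, vals, rcond=None)
    yy, xx = np.mgrid[0:h, 0:w]
    return terms(xx / w, yy / h) @ coef

# A synthetic linear gradient is recovered almost exactly:
yy, xx = np.mgrid[0:64, 0:64]
gradient = 100 + 0.5 * xx + 0.2 * yy
model = fit_background(gradient)
print(np.abs(gradient - model).max() < 1e-6)  # True
```

The spline model PixInsight actually solves is far more flexible (and far more expensive), which is why 200 samples per row takes hours.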

I hit go and waited.  And waited.  And typed up this update on the blog post.  And waited.  Finally, after about an hour, the console window appeared, and it started calculating the 2D surface splines on the first color channel.  It got to 92% and then stopped for a half hour!  I finally killed PixInsight and started it again.  This time I'll just wait.  It might need to be overnight, we'll see.  The first part, which I think is when it's analyzing the sample points, only uses one core of my 4-core hyperthreaded Intel i7 processor (it's time for an upgrade...), but that one core is still at 4.0 GHz.  Next time, when I finally get myself a water cooler, remind me to overclock it!  The next part though -- calculating the splines -- was using almost all of my processing power, which made it hard to use the computer!  

It finally finished, at least two hours later but possibly as long as four, since I stepped away from the desk to make dinner...and I forgot to actually have the correction subtract!  😭😭 I tried subtracting it myself with PixelMath, but it looked terrible.  So I let it run again...

Here's the background:

Wow that is cool!  So much hidden beneath the surface.

[Update December 31, 2018]

I let it run overnight, and here's the result!

Wait that looks the same????  😐 Maybe the stars are deeper in the background now...screen transfer function can be deceptive since it's automatic, so it might be digging deeper into the background.  Guess I'll just keep processing and see what happens!

Next, I went back over to the stars-only image, and realized that I hadn't saved the cropped and DBE'd one I'd made earlier, so I would just have to re-crop the stars image and guess.  Luckily, it doesn't have to be perfect, since the comet is moving anyway.  Hopefully it's close enough.  Then I re-DBE'd the stars-only image.

Then it was time to add the stars and comet images together with PixelMath.  

Well...okay yeah guessing on the crop isn't going to work here either since we still have the ghost stars in the background of the comet image.  *sigh*  All right, time for round three...

I went back and opened up the stars-only and comet-only stacks, cropped them the same with DynamicCrop (and saved those images), and then did a DynamicBackgroundExtraction on the stars-only image (and saved that image as well as the DBE process), and then did the wide DBE on the comet-only image.  For interest's sake, here's the background it extracted from the comet-only image.

Background that was subtracted from the comet-only image.

Now it's time for another high-density DBE to hopefully try and kill the leftover stars in the comet image.  Since I still had stars using the parameters in the PixInsight tutorial, I cut out a small preview window from the whole image so that I could mess with the parameters and see what worked.  With the small preview window, it only took a few seconds to generate the DBE'd image.  

I played around with Tolerance, Shadows Relaxation, and Smoothing Factor in the Model Parameters (1) section, but they all looked more or less the same.  The backgrounds that were subtracted varied a bit in granularity, but the outcome looked pretty equivalent.  I just got the new edition of Warren Keller's Inside PixInsight, which I'm hoping will offer some insight into the parameters of these processes and what they're really doing.  Now that I'm starting to get a good feel for how they work, I want to actually know what they're doing!  Then I can tune the parameters more smartly (is that a word??)

After doing some comparisons, it looked like the PixInsight tutorial's values would reduce the background stars the best, so I might just have to use other means to reduce them further later on.  I went back to the main image, set the settings, and then went and worked on other stuff.

About three hours later, it was finished.

Then I could add the stars and comet images together with PixelMath!

The tutorial instructs doing a deconvolution next with a star mask, so I made one from the stars-only image.

Normally I do this to increase sharpness on a deep sky object and I make a model star, but I figured I'd roll with it and see where it went.

Wow, it was terrible.

I made the stars bigger in the star mask by applying a MorphologicalTransform twice:

It didn't help.  In fact, I think it made these worse!  So I skipped over that part.

Next was helping to reduce the background light with BackgroundNeutralization.  You need to make a preview window that doesn't have any stars so that PixInsight knows what is truly background.  I used the same preview window for running ColorCalibration as well.  

The initial image out of BackgroundNeutralization did appear darker, until I re-applied the ScreenTransferFunction, which made it look the same again.  But if I clone it and undo the process (and don't re-apply the STF), you can see the difference.

Next came the ColorCalibration, using a new preview window (because a) I deleted the old image with the preview without thinking, and b) shouldn't the background reference be on the now-neutralized image?)

The color of the comet became slightly greener, but that was about it -- DBE can sometimes do a pretty good job of color balancing itself.

I decided it would be a good time to get rid of those little bug spots, so I used CloneStamp.  I hadn't re-done that after starting over, since I figured it'd be better to do it on the combined image anyway.


I got tired of looking at the noise, and the next step in the tutorial was stretching, so I went ahead and applied MultiscaleLinearTransform, which is amazing.  Since there's not really any detail with the comet, I didn't bother to make a mask to protect it like I would any DSO I wanted detail on.  I have a copy of the process saved in my astrophotography folder with settings recommended by the Light Vortex Astronomy tutorial so that I don't have to change the settings over and over again -- very handy!

Woo hoo!  Much cleaner.

Since the stars were kind of green (ColorCalibration never seems to work in my favor), I decided to try my favorite color calibration tool -- PhotometricColorCalibration.  I input the RA and Dec coordinates for where the comet was in the reference frame that night, my focal length and pixel size, and let it run.  The result (even after re-applying the STF) was...waaaaaay off.

Un-did that!!

Next, it was time to stretch.  I hoped I could darken the background enough to hide the streaked stars.  I decided to try the MaskedStretch that the PixInsight tutorial does, using the same settings.  

Mm, gonna give that one a thumbs-down.  Lemme just do this myself.

I really had to clip it deep in the blacks to hide the streaked stars and the halos around the bug spots.

Ha-rumph.  I decided to roll with it, but had another idea brewing...

Next was fixing the green cast with CurvesTransformation.  I like adjusting curves, in both PixInsight and Photoshop -- it's your one-stop shop for color correction, selective brightening/darkening, and saturation adjustment.

Ugh, this was just not working out.  Time for plan B!

So, the comet is bright, and the stars are bright.  The rest is not.  Let's make a mask!  I backed up a few steps to pre-stretching the image.  Applying a STF made it look terrible, even after the MultiscaleLinearTransform.  This dataset is really a mess.

I went to the RangeSelection process, turned on Real-Time Preview, and set the lower limit to reveal the comet.  It was lower than I thought, and it was a very fine line between revealing the comet and revealing the entire image.  I found a value that looked good and hit Apply.  It came out dimmer than the preview, so I stretched it.  It happened to catch most of the stars too, so I didn't need to also make a star mask to add to it.
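RangeSelection boils down to a soft threshold: pixels below the lower limit go to 0, pixels well above it go to 1, with a ramp (the "fuzziness") in between so the mask edge isn't a hard line.  A one-function numpy sketch with made-up pixel values:

```python
import numpy as np

def range_mask(img, lower, fuzz=0.05):
    """RangeSelection-style mask: 0 below `lower`, 1 above `lower + fuzz`,
    with a linear ramp in between to soften the mask edge."""
    return np.clip((img - lower) / fuzz, 0.0, 1.0)

img = np.array([0.02, 0.10, 0.13, 0.40])   # background, faint halo, comet, star
print(range_mask(img, lower=0.08))  # [0.  0.4 1.  1. ]
```

You can see why the lower limit is such a fine line: the comet's outer halo sits barely above the background, so nudging `lower` down even a little starts selecting the whole frame.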

Now I could apply the mask either for protecting the background or protecting the stars and comet by inverting it.  

Red areas are protected.

You'll notice that the outer halo of the comet is not protected here - it's too close to the background noise, sorry!

So first I had the mask protect the background, and I stretched just the comet and stars.  Then I inverted the mask and just did a tiny stretch on the background so I could keep it dim but not so dark as to look fake.  Much better result!

I think the left corner got a little left out of the mask, but it's too large to crop out.  I'll just leave it, this image is terrible anyway!

Finally, I fixed the colors using CurvesTransformation, and boosted the saturation.  And then I called it quits!!

Date: 9 December 2018
Object: Comet 46P/Wirtanen
RA/Dec at Reference Frame: 03h 13m 16.4s, -00 deg 02' 58.8"
Attempt: 1
Camera: Nikon D5300
Telescope: Nikon 18-55mm lens @ 100mm, f/4.8
Accessories: N/A
Mount: Vixen Polarie
Guide scope: N/A
Guide camera: N/A
Subframes: 94x60s (2h44m)
Gain/ISO: ISO-1600
Stacking program: PixInsight 1.8.5
Stacking method (lights): Average, Winsorized Sigma Clip
Post-Processing program: PixInsight 1.8.5
Darks: 72 (30F)
Biases: 20 (28F)
Flats: 0
Temperature: 29F

In its final battle cry, the TIFF saved out with some additional stretching or something, and Lightroom only lets me import TIFFs and not JPGs for some reason, so I couldn't watermark it.  I'm not happy with it anyway, so who cares!  Haha.

As miraculous as image processing is, sometimes bad data just can't be helped!!  Worth a shot though!