Heat-death of the last of the 2-socket/2-cores-each PowerMac G5s that I have access to


Heat-death of the last of the 2-socket/2-cores-each PowerMac G5s that I have access to

freebsd-ppc mailing list
The last of the 4-core PowerMac G5s that I have access to now shuts
down for "CPU B0 DIODE TEMP" that "exceeds critical temperature
(90.0 C)" when I try to rebuild/update ports or such. The other
4-core G5 failed for such reasons in similar contexts a few months
ago. Interestingly, the two G5s have very different liquid cooling
systems despite the similar time frame for the failures.

Without the faster G5s, I may just use cross-built world/kernel
material and see if there is a tolerable but minimal set of ports
for supporting boot testing/dump inspection and basic operation of
the slower 2-socket/1-core-each and 1-socket/1-core-each PowerMacs
that I have access to, avoiding things like building devel/llvm*
ports that take so long. (I have fairly strong time preferences.)
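Since the machines shut down once a sensor crosses the 90.0 C critical threshold, one workaround for keeping a marginal G5 usable is to poll the temperature sensors and back off heavy work before the firmware kills the box. A minimal sketch follows; the sysctl OID shown is a hypothetical placeholder (the real sensor names vary by machine and driver and should be discovered with `sysctl -a | grep -i temp`), so treat this as an assumption-laden illustration rather than a known FreeBSD interface.

```python
import subprocess

CRITICAL_C = 90.0  # the shutdown threshold the firmware reports

def approaching_critical(temp_c, critical=CRITICAL_C, margin=5.0):
    """True when a reading is within `margin` degrees C of critical."""
    return temp_c >= critical - margin

def parse_temp(text):
    """Pull a float out of sysctl output such as '78.0C' plus a newline."""
    return float("".join(ch for ch in text if ch.isdigit() or ch == "."))

def read_sensor(oid):
    """Read one temperature sysctl by OID name (FreeBSD only)."""
    out = subprocess.run(["sysctl", "-n", oid],
                         capture_output=True, text=True, check=True).stdout
    return parse_temp(out)

if __name__ == "__main__":
    # Hypothetical OID -- discover the real one with: sysctl -a | grep -i temp
    temp = read_sensor("dev.cpu.0.temperature")
    if approaching_critical(temp):
        print("too hot to build: %.1f C" % temp)
```

The `approaching_critical` helper could gate a build loop, e.g. sleeping between port builds whenever a reading comes within a few degrees of critical.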

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)

_______________________________________________
[hidden email] mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-ppc
To unsubscribe, send any mail to "[hidden email]"

Re: Heat-death of the last of the 2-socket/2-cores-each PowerMac G5s that I have access to

freebsd-ppc mailing list


On 2021-Jan-11, at 17:42, Mark Millard <marklmi at yahoo.com> wrote:

> The last of the 4-core PowerMac G5s that I have access to now shuts
> down for "CPU B0 DIODE TEMP" that "exceeds critical temperature
> (90.0 C)" when I try to rebuild/update ports or such. The other
> 4-core G5 failed for such reasons in similar contexts a few months
> ago. Interestingly, the two G5s have very different liquid cooling
> systems despite the similar time frame for the failures.
>
> Without the faster G5s, I may just use cross-built world/kernel
> material and see if there is a tolerable but minimal set of ports
> for supporting boot testing/dump inspection and basic operation of
> the slower 2-socket/1-core-each and 1-socket/1-core-each PowerMacs
> that I have access to, avoid things like building devel/llvm*
> ports that take so long. (I have fairly strong time preferences.)

I've done some more testing and, while use as a (full load/speed)
builder machine is a no-go, it looks like this 4-core G5 can
still be used for boot testing and basic operation without
overheating. The prior failing machine overheated more easily
but might have a similar status if I test it just for such use.

How long the recently failed G5 will be useful for boot and basic
operation testing, I do not know. But probably longer than for
the originally-failing G5.


===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)


Re: Heat-death of the last of the 2-socket/2-cores-each PowerMac G5s that I have access to

freebsd-ppc mailing list
On 1/12/21 4:38 AM, Mark Millard via freebsd-ppc wrote:

>
>
> On 2021-Jan-11, at 17:42, Mark Millard <marklmi at yahoo.com> wrote:
>
>> The last of the 4-core PowerMac G5s that I have access to now shuts
>> down for "CPU B0 DIODE TEMP" that "exceeds critical temperature
>> (90.0 C)" when I try to rebuild/update ports or such. The other
>> 4-core G5 failed for such reasons in similar contexts a few months
>> ago. Interestingly, the two G5s have very different liquid cooling
>> systems despite the similar time frame for the failures.
>>
>> Without the faster G5s, I may just use cross-built world/kernel
>> material and see if there is a tolerable but minimal set of ports
>> for supporting boot testing/dump inspection and basic operation of
>> the slower 2-socket/1-core-each and 1-socket/1-core-each PowerMacs
>> that I have access to, avoid things like building devel/llvm*
>> ports that take so long. (I have fairly strong time preferences.)
>
> I've done some more testing and, while use as a (full load/speed)
> builder machine is a no-go, it looks like this 4-core G5 can
> still be used for boot testing and basic operation without
> overheating. The prior failing machine overheated more easily
> but might have a similar status if I test it just for such use.
>
> How long the recently failed G5 will be useful for boot and basic
> operation testing, I do not know. But probably longer than for
> the originally-failing G5.
>


The shipping cost of a replacement unit would be more money than the
entire machine itself.  I have four of these units and keep one
of them running nicely. I think I have half a dozen more somewhere
in the back of a warehouse in Toronto.  Back in 2010 (or earlier)
they were fantastic CPU powerhouse units, running some math-crunching
daemons that I wrote. Also, hard to believe, but they were more cost
effective than running the big SPARC64 units that I had at the
same time. Ever tried to run Sun/Oracle M4000 or M5000 machines?

In any case please drop me a line off-list and let's see if we can
figure out a way to get a workable unit into your hands.


--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken
GreyBeard and suspenders optional

Re: Heat-death of the last of the 2-socket/2-cores-each PowerMac G5s that I have access to

Rob Ballantyne
I've got two of these units.  My plan is to try to keep one running.
However, I haven't started the running one in over a year.

I don't know if everyone has seen: http://archive.is/v6ziF. It's an archive
of the cooling-system rebuild that someone once did.  I attempted it on one
of my units; unfortunately, the first attempt was unsuccessful.

Regards,

Rob

On Tue, Jan 12, 2021 at 8:43 AM Dennis Clarke via freebsd-ppc <
[hidden email]> wrote:

> On 1/12/21 4:38 AM, Mark Millard via freebsd-ppc wrote:
> >
> >
> > On 2021-Jan-11, at 17:42, Mark Millard <marklmi at yahoo.com> wrote:
> >
> >> The last of the 4-core PowerMac G5s that I have access to now shuts
> >> down for "CPU B0 DIODE TEMP" that "exceeds critical temperature
> >> (90.0 C)" when I try to rebuild/update ports or such. The other
> >> 4-core G5 failed for such reasons in similar contexts a few months
> >> ago. Interestingly, the two G5s have very different liquid cooling
> >> systems despite the similar time frame for the failures.
> >>
> >> Without the faster G5s, I may just use cross-built world/kernel
> >> material and see if there is a tolerable but minimal set of ports
> >> for supporting boot testing/dump inspection and basic operation of
> >> the slower 2-socket/1-core-each and 1-socket/1-core-each PowerMacs
> >> that I have access to, avoid things like building devel/llvm*
> >> ports that take so long. (I have fairly strong time preferences.)
> >
> > I've done some more testing and, while use as a (full load/speed)
> > builder machine is a no-go, it looks like this 4-core G5 can
> > still be used for boot testing and basic operation without
> > overheating. The prior failing machine overheated more easily
> > but might have a similar status if I test it just for such use.
> >
> > How long the recently failed G5 will be useful for boot and basic
> > operation testing, I do not know. But probably longer than for
> > the originally-failing G5.
> >
>
>
> The shipping cost of a replacement unit would be more money than the
> entire machine itself.  I have four of these units where I keep one
> of them running neatly. I think I have half a dozen more somewhere
> in the back of a warehouse in Toronto.  Back in 2010 ( or earlier )
> they were fantastic CPU powerhouse units running some math crunch
> daemons that I wrote. Also, hard to believe but it was more cost
> effective than running the big SPARC64 units that I had at the
> same time. Ever tried to run Sun/Oracle M4000 or M5000 machines?
>
> In any case please drop me a line off-list and let's see if we can
> figure out a way to get a workable unit into your hands.
>
>
> --
> Dennis Clarke
> RISC-V/SPARC/PPC/ARM/CISC
> UNIX and Linux spoken
> GreyBeard and suspenders optional

Re: Heat-death of the last of the 2-socket/2-cores-each PowerMac G5s that I have access to

Julio Merino
In reply to this post by freebsd-ppc mailing list
On Tue, Jan 12, 2021 at 8:43 AM Dennis Clarke via freebsd-ppc <
[hidden email]> wrote:

> The shipping cost of a replacement unit would be more money than the
> entire machine itself.


If "unit" here is a G5, I've recently had one shipped with FedEx Home
across the USA for less than $50, which I found to be surprisingly cheap.
Other delivery methods wanted $100+. I have no idea if any of these apply
to you.

--
jmmv.dev

Re: Heat-death of the last of the 2-socket/2-cores-each PowerMac G5s that I have access to

freebsd-ppc mailing list
In reply to this post by freebsd-ppc mailing list
[Off list.]

On 2021-Jan-12, at 08:42, Dennis Clarke via freebsd-ppc <freebsd-ppc at freebsd.org> wrote:

> On 1/12/21 4:38 AM, Mark Millard via freebsd-ppc wrote:
>>
>>
>> On 2021-Jan-11, at 17:42, Mark Millard <marklmi at yahoo.com> wrote:
>>
>>> . . .
>>
>> . . .
>>
>
>
> The shipping cost of a replacement unit would be more money than the
> entire machine itself.

Crossing an international border probably would not help in that
respect.

I'm in the Portland, Oregon, USA area. But I do think that for
such PowerMacs to continue to have FreeBSD Tier 2 support for
any significant time, Brandon Bergren or Justin Hibbits or
Nathan Whitehorn or such would be the better target for the
machine (unless they all refuse). If they all were to refuse,
that itself would indicate rather limited support, even
relative to Tier 2 status.

Even if the high-end G5s had not failed, I'd have been worried
about them (and other PowerMacs) staying viable, just from
FreeBSD progressing beyond G5 support (and such). For
example (common to G5/G4):

WARNING: Device "openfirm" is Giant locked and may be deleted before FreeBSD 13.0.
WARNING: Device "kbd" is Giant locked and may be deleted before FreeBSD 13.0.

I've not seen any evidence that anyone intends to work on
avoiding those uses of the Giant lock. There is also, on some
G4's that I have access to,

WARNING: Device "agp" is Giant locked and may be deleted before FreeBSD 13.0.
WARNING: Device "consolectl" is Giant locked and may be deleted before FreeBSD 13.0.

Such warnings are from post-git builds.

The "Code slush" started on 2021-Jan-08. stable/13 is scheduled
to branch on 2021-Jan-22.


> I have four of these units where I keep one
> of them running neatly. I think I have half a dozen more somewhere
> in the back of a warehouse in Toronto.

Hmm. The international border issue would be involved if I was
the receiver of such.

> Back in 2010 ( or earlier )
> they were fantastic CPU powerhouse units running some math crunch
> daemons that I wrote. Also, hard to believe but it was more cost
> effective than running the big SPARC64 units that I had at the
> same time. Ever tried to run Sun/Oracle M4000 or M5000 machines?

No Sun/Oracle M4000 or M5000 machines. The only SPARC context
was working someplace that used SPARCstations. (I do not remember
model numbers or such.)

> In any case please drop me a line off-list and let's see if we can
> figure out a way to get a workable unit into your hands.

I would encourage finding out if Nathan, Brandon, or Justin would
accept a quad G5 first and go with that if one of them would
accept one. (I've no clue how to pick if more than one said yes.)


FYI:

I have SSD media, ECC RAM and non-ECC RAM, and video cards for
these type of machines if I end up with the potential machine.


===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)


Re: Heat-death of the last of the 2-socket/2-cores-each PowerMac G5s that I have access to

Torfinn Ingolfsen-5
In reply to this post by freebsd-ppc mailing list
On Mon, 11 Jan 2021 17:42:09 -0800
Mark Millard via freebsd-ppc <[hidden email]> wrote:

> The last of the 4-core PowerMac G5s that I have access to now shuts
> down for "CPU B0 DIODE TEMP" that "exceeds critical temperature
> (90.0 C)" when I try to rebuild/update ports or such. The other
> 4-core G5 failed for such reasons in similar contexts a few months
> ago. Interestingly, the two G5s have very different liquid cooling
> systems despite the similar time frame for the failures.

For conventional (i.e. non-liquid-cooled) systems it often helps to take off the coolers, remove the old thermal paste, and apply fresh paste.
I haven't done this on a liquid-cooled G5, so I don't know how easy or hard it is.

That is, if one is inclined to tinker with hardware.
--
Torfinn Ingolfsen <[hidden email]>

Re: Heat-death of the last of the 2-socket/2-cores-each PowerMac G5s that I have access to

Brandon Bergren-3
In reply to this post by freebsd-ppc mailing list
On Tue, Jan 12, 2021, at 2:54 PM, Mark Millard via freebsd-ppc wrote:
> WARNING: Device "openfirm" is Giant locked and may be deleted before
> FreeBSD 13.0.

I am planning on getting around to this one at some point.

> WARNING: Device "kbd" is Giant locked and may be deleted before FreeBSD
> 13.0.

This one isn't just powerpc, I don't believe.

> I've not seen any evidence that anyone intends to work on
> avoiding those uses of the Giant lock. There is also, on some
> G4's that I have access to,
>
> WARNING: Device "agp" is Giant locked and may be deleted before FreeBSD
> 13.0.

There is some pressure to remove AGP entirely and rely on PCI mappings instead (which are slower, IIRC, but may still have decent performance with the modern object-management techniques the drivers have; that will need testing).

AGP is kind of a weird case because of what it is and how it works, and how it interacts with everything else.

> WARNING: Device "consolectl" is Giant locked and may be deleted before
> FreeBSD 13.0.

Might need to migrate some bits over to vt at some point.

--
  Brandon Bergren
  [hidden email]

Re: Heat-death of the last of the 2-socket/2-cores-each PowerMac G5s that I have access to

freebsd-ppc mailing list
On 2021-Jan-13, at 11:00, Brandon Bergren <bdragon at FreeBSD.org> wrote:

> On Tue, Jan 12, 2021, at 2:54 PM, Mark Millard via freebsd-ppc wrote:
>> WARNING: Device "openfirm" is Giant locked and may be deleted before
>> FreeBSD 13.0.
>
> I am planning on getting around to this one at some point.

Good to know.

>> WARNING: Device "kbd" is Giant locked and may be deleted before FreeBSD
>> 13.0.
>
> This one isn't just powerpc, I don't believe.

Okay. I do not remember seeing the notice for the amd64, aarch64, or armv7
example systems that I have access to. (But I've not rechecked to confirm.)

>> I've not seen any evidence that anyone intends to work on
>> avoiding those uses of the Giant lock. There is also, on some
>> G4's that I have access to,
>>
>> WARNING: Device "agp" is Giant locked and may be deleted before FreeBSD
>> 13.0.
>
> There is some pressure to remove AGP entirely and rely on PCI mappings instead (which are slower IIRC but may possibly still have decent performance with modern object management techniques that the drivers have. Will need testing.)
>
> AGP is kind of a weird case because of what it is and how it works, and how it interacts with everything else.

Interesting.

>> WARNING: Device "consolectl" is Giant locked and may be deleted before
>> FreeBSD 13.0.
>
> Might need to migrate some bits over to vt at some point.
>

FYI for vt vs sc:

My memory is that the 2-socket/1-core-each G5 that I have access
to dies with a very early boot failure for vt and I have to use sc on
it. (It is possible I've got things backwards. I've not validated
the status in some time, just using things in the working form.)

As I remember, there used to be some other G5/video card combination
where I also had the reverse for what failed vs. what worked, although
the boot-failure details were different if I remember right. (Much
larger pixel-count display involved in the boot-failure case?)
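For reference, the sc-vs-vt choice described above is normally pinned at boot time via a loader tunable rather than left to the default. A minimal /boot/loader.conf fragment is shown below; kern.vty is the standard FreeBSD tunable for this, though which value ("sc" or "vt") avoids the early boot failure is machine-specific, as noted above.

```
# /boot/loader.conf -- force the syscons (sc) console driver instead of vt
kern.vty="sc"
```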



Given the slush and the scheduled stable/13 branch in a little over
a week, I did not know if Giant dependency was a "support stops
here" type of issue for 13 or not.

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)


Re: Heat-death of the last of the 2-socket/2-cores-each PowerMac G5s that I have access to

freebsd-ppc mailing list
In reply to this post by freebsd-ppc mailing list
On 2021-Jan-11, at 20:38, Mark Millard <marklmi at yahoo.com> wrote:

> On 2021-Jan-11, at 17:42, Mark Millard <marklmi at yahoo.com> wrote:
>
>> The last of the 4-core PowerMac G5s that I have access to now shuts
>> down for "CPU B0 DIODE TEMP" that "exceeds critical temperature
>> (90.0 C)" when I try to rebuild/update ports or such. The other
>> 4-core G5 failed for such reasons in similar contexts a few months
>> ago. Interestingly, the two G5s have very different liquid cooling
>> systems despite the similar time frame for the failures.
>>
>> Without the faster G5s, I may just use cross-built world/kernel
>> material and see if there is a tolerable but minimal set of ports
>> for supporting boot testing/dump inspection and basic operation of
>> the slower 2-socket/1-core-each and 1-socket/1-core-each PowerMacs
>> that I have access to, avoid things like building devel/llvm*
>> ports that take so long. (I have fairly strong time preferences.)
>
> I've done some more testing and, while use as a (full load/speed)
> builder machine is a no-go, it looks like this 4-core G5 can
> still be used for boot testing and basic operation without
> overheating. The prior failing machine overheated more easily
> but might have a similar status if I test it just for such use.
>
> How long the recently failed G5 will be useful for boot and basic
> operation testing, I do not know. But probably longer than for
> the originally-failing G5.


Looks like the problem has progressed quickly, so booting
without overheating is now unlikely. It does not
appear that I'll be able to provide any testing of
2-socket/2-core-each G5 contexts any more.



As for the 2-socket/1-core-each G5: care to guess which result goes
with which machine, G5 vs. Rock64 (Cortex-A53, not RockPro64),
allowing as many CPUs to be used for the job as the executing
machine has (2 vs. 4):

[00:15:31] [01] [00:10:57] Finished ports-mgmt/pkg | pkg-1.15.10: Success
vs.:
[00:12:27] [01] [00:10:35] Finished ports-mgmt/pkg | pkg-1.15.10: Success

Yep, the Rock64 configuration that I use takes about the same
time to build pkg as the G5 does, although the Rock64 is a little
faster than the G5 for that activity.

(pkg builds by itself, so there is no competing job in the
above.)
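For concreteness, the two poudriere timestamps above differ by only a small margin. A quick computation (a sketch; the labeling of which duration belongs to which machine follows the "Rock64 is a little faster" statement above, and the HH:MM:SS parsing assumes the bracketed poudriere phase format shown):

```python
def to_seconds(hms):
    """Convert an HH:MM:SS duration (poudriere log style) to seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

g5_pkg = to_seconds("00:10:57")      # 2-socket/1-core-each G5
rock64_pkg = to_seconds("00:10:35")  # Rock64 (Cortex-A53)

# The Rock64 used about 3.3% less wall-clock time for the pkg build.
speedup_pct = 100.0 * (g5_pkg - rock64_pkg) / g5_pkg
print(round(speedup_pct, 1))  # -> 3.3
```

That is, a roughly 22-second difference over an almost 11-minute build, which fits the "about the same time" characterization.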

The RPi4B configuration (Cortex-A72), MACCHIATObin Double Shot
configuration (Cortex-A72), and the OverDrive 1000 configuration
(Cortex-A57) that I use are all faster than the
2-socket/1-core-each G5 for doing self-hosted, parallel builds.
All 3 are faster than the Rock64 for such activity. The
OverDrive 1000 is the fastest of these machines at doing
parallel builds, apparently largely due to RAM caching
differences and other memory-subsystem distinctions. (The CPU
clock rate is slower than what the A72 configurations are using.)

(For doing aarch64 and armv7 port builds, I generally build
on the OverDrive and the MACCHIATObin. Of the configurations
reported on above, the MACCHIATObin one is the 2nd fastest for
parallel builds.)

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
