Amazon AMIs

Amazon AMIs

Matthew Seaman
Hi,

I've been playing with setting up a very simple website on AWS to do
speed testing for the ISP I work for[*].  Naturally I used the FreeBSD
12.0 AMIs.  The result with the default m4.large instance was actually
pretty disappointing:

speedtest:~:% iperf3 -p 443 -P 3 -c test1.lightspeed.gigaclear.com
Connecting to host test1.lightspeed.gigaclear.com, port 443
[  4] local 46.227.144.15 port 38926 connected to 3.8.245.243 port 443
[  6] local 46.227.144.15 port 38928 connected to 3.8.245.243 port 443
[  8] local 46.227.144.15 port 38930 connected to 3.8.245.243 port 443
[...]
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   177 MBytes   148 Mbits/sec  123             sender
[  4]   0.00-10.00  sec   176 MBytes   147 Mbits/sec                  receiver
[  6]   0.00-10.00  sec   281 MBytes   236 Mbits/sec  155             sender
[  6]   0.00-10.00  sec   280 MBytes   235 Mbits/sec                  receiver
[  8]   0.00-10.00  sec   206 MBytes   173 Mbits/sec  118             sender
[  8]   0.00-10.00  sec   205 MBytes   172 Mbits/sec                  receiver
[SUM]   0.00-10.00  sec   664 MBytes   557 Mbits/sec  396             sender
[SUM]   0.00-10.00  sec   661 MBytes   554 Mbits/sec                  receiver

Couldn't even saturate a 1G link.
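
The server end is nothing special, by the way: just iperf3 listening in
server mode on port 443 on each instance, something like

    iperf3 -s -p 443 -D     # listen on 443, run as a daemon

(the -D daemonisation is an assumption on my part about the exact
invocation; the client commands below are what matter).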

So I tried one of the newer m5.large instances instead.  As well as
being rather newer and better integrated with FreeBSD (m5 instances
have if_ena network interfaces and nvd disk devices, whereas m4 has ixv
networking and xbd disks, with the root disk disguised as ada but the
additional drives not), they're actually slightly cheaper for the same
nominal CPU count, RAM and disk:

speedtest:~:% iperf3 -p 443 -P 3 -c test0.lightspeed.gigaclear.com
Connecting to host test0.lightspeed.gigaclear.com, port 443
[  4] local 46.227.144.15 port 54264 connected to 18.130.169.5 port 443
[  6] local 46.227.144.15 port 54266 connected to 18.130.169.5 port 443
[  8] local 46.227.144.15 port 54268 connected to 18.130.169.5 port 443
[...]
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.90 GBytes  1.63 Gbits/sec  2484             sender
[  4]   0.00-10.00  sec  1.89 GBytes  1.63 Gbits/sec                   receiver
[  6]   0.00-10.00  sec  1.87 GBytes  1.60 Gbits/sec  3381             sender
[  6]   0.00-10.00  sec  1.86 GBytes  1.60 Gbits/sec                   receiver
[  8]   0.00-10.00  sec  1.90 GBytes  1.64 Gbits/sec  3607             sender
[  8]   0.00-10.00  sec  1.90 GBytes  1.63 Gbits/sec                   receiver
[SUM]   0.00-10.00  sec  5.67 GBytes  4.87 Gbits/sec  9472             sender
[SUM]   0.00-10.00  sec  5.65 GBytes  4.86 Gbits/sec                   receiver

So, around 1.6Gb/s per stream and close to 5Gb/s aggregate, with an
out-of-the-box configuration and no tuning.
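
If you want to confirm which drivers you've ended up with on a given
instance, something like the following should show the difference (a
rough sketch; device and interface names will vary):

    ifconfig -a        # expect ena0 on m5, ixv0 on m4
    geom disk list     # expect nvd/NVMe disks on m5, Xen blkfront on m4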

Question:  Why is m4.large the recommended instance type?  Surely we'd
be better served and present users with a better experience by
recommending an m5 instance as one of the more modern and higher
performance types?

        Cheers,

        Matthew


[*] It's an Ofcom requirement here in the UK.  If what we sell is
described as a 940Mb/s pure fibre connection, then by golly it should be
capable of pulling down 940Mb/s even at peak usage times of day[+].  So
we need to measure this regularly, which means we need to roll out a
bunch of small devices to sit in customer premises and run automated
tests downloading large blobs of random data from a website "not on our
own network."
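The probe-side test itself is essentially just timing the download of a
big blob of random data; the general shape is something like this (a
sketch with a made-up URL, not the actual probe code):

    # fetch a large random blob, report average download speed (bytes/sec)
    curl -o /dev/null -s -w '%{speed_download}\n' \
        https://test1.lightspeed.gigaclear.com/blob-1G.bin
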

[+] We can't count packet headers as part of the delivered bandwidth, or
this would just be a 1Gb/s service.
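(For anyone wondering where the 940 figure comes from, the usual
back-of-the-envelope, assuming a 1500-byte MTU and TCP timestamps:

    on-wire frame = 1500 + 14 (Eth) + 4 (FCS) + 8 (preamble) + 12 (IFG) = 1538 bytes
    TCP payload   = 1500 - 20 (IP) - 20 (TCP) - 12 (TS options)         = 1448 bytes
    goodput       = 1448 / 1538 * 1000 Mb/s                             ≈ 941 Mb/s

i.e. roughly 94% of line rate once the headers are paid for.)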
Re: Amazon AMIs

Colin Percival
On 2/20/19 3:00 AM, Matthew Seaman wrote:
> Question:  Why is m4.large the recommended instance type?  Surely we'd be
> better served and present users with a better experience by recommending an m5
> instance as one of the more modern and higher performance types?

Last time I looked at this, we weren't handling hotplug/hotunplug of "NVMe"
disks properly on the m5/c5/etc. instances.  I opted to recommend the instance
which completely works rather than the one with slightly better performance...
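
(If anyone wants to poke at this, the failing case is roughly: attach or
detach an EBS volume while the instance is running and watch whether the
nvd device appears and disappears cleanly.  A sketch, with placeholder
volume and instance IDs:

    # hot-attach an EBS volume to a running m5/c5 instance
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/sdf

    # on the FreeBSD guest, the new disk should show up as an nvd device
    nvmecontrol devlist
    geom disk list

    # hot-detach is where things have gone wrong
    aws ec2 detach-volume --volume-id vol-0123456789abcdef0
)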

--
Colin Percival
Security Officer Emeritus, FreeBSD | The power to serve
Founder, Tarsnap | www.tarsnap.com | Online backups for the truly paranoid
Re: Amazon AMIs

Alex Dupre
Colin Percival wrote:
> On 2/20/19 3:00 AM, Matthew Seaman wrote:
>> Question:  Why is m4.large the recommended instance type?  Surely we'd be
>> better served and present users with a better experience by recommending an m5
>> instance as one of the more modern and higher performance types?
>
> Last time I looked at this, we weren't handling hotplug/hotunplug of "NVMe"
> disks properly on the m5/c5/etc. instances.  I opted to recommend the instance
> which completely works rather than the one with slightly better performance...

It happens on only a few instances, but I do get some freezes on new t3
machines: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=235856

They are indeed cheaper and more performant, but not 100% reliable in
every workload.

--
Alex Dupre

Re: Amazon AMIs

Jonathan Anderson
On 20 Feb 2019, at 18:50, Alex Dupre wrote:

> Colin Percival wrote:
>> Last time I looked at this, we weren't handling hotplug/hotunplug of
>> "NVMe"
>> disks properly on the m5/c5/etc. instances.  I opted to recommend the
>> instance
>> which completely works rather than the one with slightly better
>> performance...
>
> It does happen only on a few instances, but I get some freezes on new
> t3
> machines: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=235856
>
> They are indeed cheaper and more performant, but not 100% reliable in
> every workload.

https://www.xkcd.com/937 ?

:)


Jon
--
Jonathan Anderson
[hidden email]
Re: Amazon AMIs

Matthew Seaman
On 20/02/2019 22:32, Jonathan Anderson wrote:

> On 20 Feb 2019, at 18:50, Alex Dupre wrote:
>
>> Colin Percival wrote:
>>> Last time I looked at this, we weren't handling hotplug/hotunplug of
>>> "NVMe"
>>> disks properly on the m5/c5/etc. instances.  I opted to recommend the
>>> instance
>>> which completely works rather than the one with slightly better
>>> performance...
>>
>> It does happen only on a few instances, but I get some freezes on new t3
>> machines: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=235856
>>
>> They are indeed cheaper and more performant, but not 100% reliable in
>> every workload.
>
> https://www.xkcd.com/937 ?
>
Yes, indeed.  That's a very good reason not to recommend the newest and
shiniest.  I haven't seen any problems so far, but then again it's only
been a day or so and we haven't got into the full testing regime quite
yet.  I'll let you know if we do run into problems.

Is there work on hot-plug NVMe going on?  ISTR jmg@ mentioning hotplug
PCI at the dev summit at Stockholm EuroBSDCon, but not much since then.

        Cheers,

        Matthew



Re: Amazon AMIs

Warner Losh
On Thu, Feb 21, 2019 at 1:02 AM Matthew Seaman <[hidden email]> wrote:

> On 20/02/2019 22:32, Jonathan Anderson wrote:
> > On 20 Feb 2019, at 18:50, Alex Dupre wrote:
> >
> >> Colin Percival wrote:
> >>> Last time I looked at this, we weren't handling hotplug/hotunplug of
> >>> "NVMe"
> >>> disks properly on the m5/c5/etc. instances.  I opted to recommend the
> >>> instance
> >>> which completely works rather than the one with slightly better
> >>> performance...
> >>
> >> It does happen only on a few instances, but I get some freezes on new t3
> >> machines: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=235856
> >>
> >> They are indeed cheaper and more performant, but not 100% reliable in
> >> every workload.
> >
> > https://www.xkcd.com/937 ?
> >
>
> Yes, indeed.  That's a very good reason not to recommend the newest and
> shiniest.  I haven't seen any problems so far, but then again it's only
> been a day or so and we haven't got into the full testing regime quite
> yet.  I'll let you know if we do run into problems.
>
> Is there work on hot-plug NVMe going on?  ISTR jmg@ mentioning hotplug
> PCI at the dev summit at Stockholm EuroBSDCon, but not much since then.
>

It's hot-unplug that doesn't work quite right. Hotplug works, I believe,
if you have PCI_HP in your kernel.
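
If PCI_HP isn't already in the kernel you're running, a minimal custom
config along these lines should add it (the config name here is made up):

    # sys/amd64/conf/HOTPLUG -- hypothetical custom kernel config
    include GENERIC
    ident   HOTPLUG
    options PCI_HP          # PCI-e native HotPlug support

built with the usual make buildkernel KERNCONF=HOTPLUG && make
installkernel KERNCONF=HOTPLUG.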

What's needed is about a solid week of cleanup and testing in this area,
however.

Warner